Chronically stressful life events have been shown to lead to depression. Chronic stress causes hyperactivity of the HPA axis, which results in more glucocorticoids (cortisol in humans) circulating in the body. This excess cortisol, in turn, is proposed to underlie the affective symptoms of depression. People with depression have also been found to have up to 20% smaller hippocampal volume, and a theory is gaining ground that depression is due to reduced neurogenesis. Even if the entire spectrum of depressive symptoms cannot be pinned on reduced neurogenesis and an atrophied or smaller hippocampus, at least the cognitive symptoms of depression seem largely attributable to it.
I stumbled upon a commentary by Robert Sapolsky that, although 10 years old, I still found interesting and worth bringing to the notice of my dear readers. In it Sapolsky looks at a study by Czeh et al. that found evidence linking reduced proliferation in the dentate gyrus and a shrunken hippocampus to depressive stress, as modeled by a psychosocial stress paradigm in tree shrews. They also found that an antidepressant, tianeptine, reversed the effects of stress by restoring proliferation and hippocampal size, thus reversing the symptoms of depression. However, glucocorticoid levels remained elevated after antidepressant treatment, so it appears that antidepressants work downstream of the stress-induced increase in glucocorticoids.
Sapolsky believes that the data support either of the models presented in figure 1A or figure 1B: increased glucocorticoids can lead to shrinkage of the hippocampus directly, or through their effect on affective symptoms. I believe figure 1C is also possible, and it's not necessarily incompatible with 1A or 1B: increased stress may lead to increased cortisol, which may lead to reduced neurogenesis, which may lead to shrinkage of the hippocampus, which may in turn lead to affective and cognitive symptoms.
An alternative to the reduced neurogenesis/proliferation theory is the dendritic atrophy/neurotoxicity theory, which posits that shrinkage of the hippocampus is due to cell death/white matter loss. This too is a possibility, but the evidence in favor of reduced neurogenesis is growing stronger by the day.
Overall, the new paradigms in depression research that look beyond serotonin or monoamine imbalance are a welcome trend, and hopefully they will lead to better interventions and prevention strategies and not just better pharmaceutical innovations. It's time we realized the role chronic stress plays in depression and how easily that can be prevented to reduce the mental health burden.
Sapolsky, R. (2001). Depression, antidepressants, and the shrinking hippocampus. Proceedings of the National Academy of Sciences, 98(22), 12320-12322. DOI: 10.1073/pnas.231475998
Czeh, B., et al. (2001). Stress-induced changes in cerebral metabolites, hippocampal volume, and cell proliferation are prevented by antidepressant treatment with tianeptine. Proceedings of the National Academy of Sciences, 98(22), 12796-12801. DOI: 10.1073/pnas.211427898
There is a recent article by Pronin and Jacobs on the relationship between mood, thought speed, and the experience of 'mental motion' that builds on their previous work.
Let us see how they describe thought speed and variability and what their hypothesis is:
1. The principle of thought speed. Fast thinking, which involves many thoughts per unit time, generally produces positive affect. Slow thinking, which involves few thoughts per unit time, generally produces less positive affect. At the extremes of thought speed, racing thoughts can elicit feelings of mania, and sluggish thoughts can elicit feelings of depression.
2. The principle of thought variability. Varied thinking generally produces positive affect, whereas repetitive thinking generally produces negative affect. This principle is derived in part from the speed principle: when thoughts are repetitive, thought speed (thoughts per unit time) diminishes. At its extremes, repetitive thinking can elicit feelings of depression (or anxiety), and varied thinking can elicit feelings of mania (or reverie).
Let me clarify at the outset that they are aware of the effects of thought speed on variability and vice versa, as well as the effects of mood on felt energy and vice versa; thus they know that one can confound the other. Another angle they consider is the relationship between thought speed/variability (i.e., the form of thought) and the contents of thought (whether emotionally salient or neutral); they investigated whether the effects of speed and variability were confounded with thought content, and found evidence against this interactionist view.
Let me also clarify that I differ slightly (based on my interpretation of their data) from their original hypothesis: I believe their data show that speed affects felt energy and variability affects mood, and that the effects of speed on mood may be mediated by the effect of speed on felt energy, while the effect of variability on felt energy may be mediated by its effect on mood.
Thus my claim is that:
Thought speed leads to more felt energy. The extreme of 'racing thoughts' leads to the manic feeling of being very energetic (when accompanied by positive mood, this may give rise to feelings of grandiosity: I have the energy to achieve anything), but it may also lead to anxiety states (when accompanied by negative affect) in which one cannot really suppress a negative chain of thoughts, one following the other in fast succession, regarding the object of one's anxiety. The counterpart is the state where thoughts come slowly (writer's block etc.); when accompanied by negative affect, this can easily be viewed as depression.
Thought variability leads to more positive affect. The extreme of 'tangential thoughts' leads to the manic feeling of being in a good mood (when accompanied by high energy, this manifests as feelings of euphoria), while the same tangential thoughts, when accompanied by low felt energy, may actually be felt as serenity/calmness/reverie. The counterpart is the state of thoughts that are stuck in a rut; when accompanied by low energy, this leads to feelings of depression and sadness.
Thus, to put it simply: there are two dimensions one needs to take care of, mood (thought variability) x energy (thought speed), and the high and low extremes on these dimensions are opposites of their counterparts.
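The two-dimensional space above can be sketched as a toy lookup table. This is a minimal illustration of my own framework, with quadrant labels of my own choosing, not the authors' terminology:

```python
# Hypothetical 2x2 mapping of mood (driven by thought variability)
# x energy (driven by thought speed); the quadrant labels are my own.
AFFECTIVE_STATES = {
    ("high energy", "positive mood"): "mania / euphoria",
    ("high energy", "negative mood"): "anxiety / agitation",
    ("low energy",  "positive mood"): "serenity / reverie",
    ("low energy",  "negative mood"): "depression / dejection",
}

def felt_state(fast_thoughts: bool, varied_thoughts: bool) -> str:
    """Map the form of thought (speed and variability) onto a felt state."""
    energy = "high energy" if fast_thoughts else "low energy"
    mood = "positive mood" if varied_thoughts else "negative mood"
    return AFFECTIVE_STATES[(energy, mood)]
```

For instance, fast but repetitive thinking lands in the anxiety quadrant, while slow but varied thinking lands in the reverie quadrant.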
Before we move on, I’ll let the authors present their other two claims too:
3. The combination principle. Fast, varied thinking prompts elation; slow, repetitive thinking prompts dejection. When speed and variability oppose each other, such that one is low and the other high, individuals’ affective experience will depend on factors including which one of the two factors is more extreme. The psychological state elicited by such combinations can vary apart from its valence, as shown in Figure 1. For example, repetitive thinking can elicit feelings of anxiety rather than depression if that repetitive thinking is rapid. Notably, anxious states generally are more energetic than depressive states. Moreover, just as fast-moving physical objects possess more energy than do identical slower objects, fast thinking involves more energy (e.g., greater wakefulness, arousal, and feelings of energy) than does slow thinking.
4. The content independence principle. Effects of thought speed and variability are independent of the specific nature of thought content. Powerful affective states such as depression and anxiety have been traced to irrational and dysfunctional cognitions (e.g., Beck, 1976). According to the independence principle, effects of mental motion on mood do not require any particular type of thought content.
They review a number of factors and studies that all point to a causal link between thought speed and energy and between thought variability and mood. More importantly, they show that the effects of thought speed and variability are independent of the effects of thought content on mood. I'll not go into the details of the studies and experiments they performed, as their article is freely available online and one can read it for oneself (it makes for excellent reading); suffice it to say that I believe they are on the right track and have evidence to back their claims.
What are the implications of this? To quote:
The speed and repetition of thoughts, we suggest, could be manipulated in order to alter and alleviate some of the mood and energy symptoms of mental disorders. The slow and repetitive aspects of depressive thinking, for example, seem to contribute to the disorder’s affective symptoms (e.g., Ianzito et al., 1974; Judd et al., 1994; Nolen-Hoeksema, 1991; Philipp et al., 1991; Segerstrom et al., 2000). Thus, techniques that are effective in speeding cognition and in breaking the cycle of repetitive thought may be useful in improving the mood and energy levels of depressed patients. The potential of this sort of treatment is suggested by Pronin and Wegner’s (2006) study, in which speeding participants’ cognitions led to improved mood and energy, even when those cognitions were negative, self-referential, and decidedly depressing. It also is suggested by Gortner et al.’s (2006) finding that an expressive writing manipulation that decreased rumination (even while inducing thoughts about an upsetting experience) rendered recurrent depression less likely.
There also is some evidence suggesting that speeding up even low-level cognition may improve mood in clinically depressed patients. In one experiment, Teasdale and Rezin (1978) instructed depressed participants to repeat aloud one of four letters of the alphabet (A, B, C, or D) presented in random order every 1, 2, or 4 s. They found that those participants required to repeat the letters at the fastest rate experienced the most reduction in depressed mood. Similar techniques could be tested for the treatment of other mental illnesses. For example, manipulations might be designed to decrease the mental motion of manic patients, perhaps by introducing repetitive and slow cognitive stimuli. Or, in the case of anxiety disorders, it would be worthwhile to test interventions aimed at inducing slow and varied thought (as opposed to the fast and repetitive thought characteristic of anxiety). The potential effectiveness of such interventions is supported by the fact that mindfulness meditation, which involves slow but varied thinking, can lessen anxiety, stress, and arousal.
Pronin, E., & Jacobs, E. (2008). Thought Speed, Mood, and the Experience of Mental Motion. Perspectives on Psychological Science, 3(6), 461-485. DOI: 10.1111/j.1745-6924.2008.00091.x
Pronin, E., & Wegner, D. (2006). Manic Thinking: Independent Effects of Thought Speed and Thought Content on Mood. Psychological Science, 17(9), 807-813. DOI: 10.1111/j.1467-9280.2006.01786.x
Daniel Nettle writes an article in the Journal of Theoretical Biology about the evolution of low mood states. Before I get to his central thesis, let us review what he reviews:
Low mood describes a temporary emotional and physiological state in humans, typically characterised by fatigue, loss of motivation and interest, anhedonia (loss of pleasure in previously pleasurable activities), pessimism about future actions, locomotor retardation, and other symptoms such as crying. … This paper focuses on a central triad of symptoms which are common across many types of low mood, namely anhedonia, fatigue and pessimism. Theorists have argued that these symptoms function to reduce risk-taking, whereas their opposites facilitate novel and risky behavioural projects. They do this, proximately, by making the potential payoffs seem insufficiently rewarding (anhedonia), the energy required seem too great (fatigue), or the probability of success seem insufficiently high (pessimism). An evolutionary hypothesis for why low mood has these features, then, is that it is adaptive to avoid risky behaviours when one is in a relatively poor current state, since one would not be able to bear the costs of unsuccessful risky endeavours at such times.
I would like to pause here and note how beautifully he has summed up the symptoms and key features of low mood. Taking the liberty to redefine them using my own framework of Value x Expectancy, and the distinction between the cognitive ('wanting') and behavioral ('liking') side of things:
Anhedonia: behavioral inability to feel rewarded by previously pleasurable activities. Loss of ‘liking’ following the act. Less behavioral Value assigned.
Loss of motivation and interest: cognitive inability to look forward to or value previously desired activities. Loss of ‘wanting’ prior to the act. Less cognitive Value assigned.
Fatigue: behavioral inability to feel that one can achieve the desired outcome due to feelings that one does not have sufficient energy to carry the act to success. Less behavioral Expectancy assigned.
Pessimism: cognitive inability to look forward to or expect good things about the future or that good outcomes are possible. Less cognitive Expectancy assigned.
The reverse conglomeration is found in high mood: high wanting and liking, high energy and outlook. Thus, I agree with Nettle fully that low mood and high mood are defined by these opposed features, and also that these features are powerful proximate mechanisms that determine the risk proneness of the individual: by subjectively manipulating the Value and Expectancy associated with an outcome, high and low mood mediate the risk proneness an organism displays while assigning a utility to an action. Thus, it is fairly settled: if the ultimate goal is to increase risk-prone behavior, then the organism should use the proximate mechanism of high mood; if the ultimate goal is to avoid risky behavior, then the organism should display low mood, which would proximately help it avoid risky behavior.
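To make the proximate mechanism concrete, here is a minimal sketch of how mood might scale the Value x Expectancy computation. This is my own toy formula, not anything from Nettle's paper, and the scaling factors are arbitrary assumptions:

```python
def subjective_utility(value, expectancy, mood):
    """Toy Value x Expectancy calculation: mood scales both the
    subjective value of a payoff and the subjective expectancy of
    success (capped at certainty), so low mood shrinks the computed
    utility of a risky act and high mood inflates it."""
    scale = {"low": 0.5, "neutral": 1.0, "high": 1.5}[mood]
    return (value * scale) * min(1.0, expectancy * scale)
```

For the same risky act (value 10, expectancy 0.6), the low-mood organism computes a much smaller utility than the high-mood one, and so is far less likely to attempt it.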
Now let me talk about Nettle's central thesis. It has been previously proposed in the literature that low mood (and thus risk aversion) is due to being in a poor state, wherein one can avoid energy expenditure (and thus a worsening of the situation) by assuming a low profile. Nettle plays the devil's advocate and argues that exactly the opposite argument can be made: an organism in a poor state needs to indulge in high-risk (and high-energy) activities to get out of the poor state. Thus, there is no a priori reason why one explanation should be sounder than the other. To find out when exactly high-risk behavior pays off and when low-risk behavior is more optimal, he develops a model and uses some elementary mathematics to derive his conclusions. He bases his model, of course, on a preventive focus, whereby the organism tries to avoid falling below a threshold state R. He allows the current state S(t) to be maximized under the constraint that one does not lose sight of R. I'll not go into the mathematics, but the results are simple: when there is a lot of difference between R (the dreaded state) and S (the current state), the organism adopts a risky behavioral profile; when R and S are close, it maintains low-risk behavior; however, in dire circumstances (when S has nearly fallen to R), risk proneness again rises to dramatic levels. To quote:
The model predicts that individuals in a good state will be prepared to take relatively large risks, but as their state deteriorates, the maximum riskiness of behaviour that they will choose declines until they become highly risk-averse. However, when their state becomes dire, there is a predicted abrupt shift towards being totally risk-prone. The switch to risk-proneness at the dire end of the state continuum is akin to that found near the point of starvation in the original optimal foraging model from which the current one is derived (Stephens, 1981). The graded shift towards greater preferred risk with improving state is novel to this model, and stems from the stipulation that if the probability of falling into the danger zone in the next time step is minimal, then the potential gain in S at the next time step should be maximised. However, a somewhat similar pattern of risk proneness in a very poor state, risk aversion in an intermediate state, and some risk proneness in a better state, is seen in an optimal-foraging model where the organism has not just to avoid the threshold of starvation, but also to try to attain the threshold of reproduction (McNamara et al., 1991). Thus, the qualitative pattern of results may emerge quite generally from models using different assumptions.
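Nettle's qualitative pattern, risk-prone when safe, risk-averse near the danger zone, then abruptly risk-prone again in dire straits, can be reproduced with a toy variance-choice rule. This is my own illustrative sketch, not the model from the paper; the Gaussian payoffs, the `p_max` constraint and all the numbers are assumptions:

```python
from statistics import NormalDist

def preferred_risk(S, R, mu=1.0, sigmas=(0.1, 0.5, 1.0, 2.0, 4.0),
                   p_max=0.05):
    """Pick the riskiest (highest-variance) action whose chance of
    dropping the state S below the danger threshold R stays below
    p_max.  Each action yields a Normal(mu, sigma) change in state.
    If even the safest action is too likely to end in disaster, the
    agent goes fully risk-prone: only a big gamble can help now."""
    nd = NormalDist()
    feasible = [s for s in sigmas
                if nd.cdf((R - S - mu) / s) <= p_max]
    return max(feasible) if feasible else max(sigmas)

print(preferred_risk(10.0, 0.0))   # 4.0 -- far above threshold: max risk
print(preferred_risk(1.0, 0.0))    # 1.0 -- near threshold: risk-averse
print(preferred_risk(-1.0, 0.0))   # 4.0 -- dire straits: max risk again
```

The graded decline in preferred risk as S approaches R, followed by the abrupt switch to full risk-proneness once the danger zone is unavoidable, mirrors the quoted prediction.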
Nettle then extrapolates the clinical significance of this by proposing that 'agitated'/'excited' depression can be explained as the organism being in dire straits and thus having become risk-prone. He also uses similar logic for dysphoric mania, although I don't buy that. However, I agree that euphoric mania may just be the extreme of high mood, risk proneness and goal achievement, while depression is the extreme of low mood, adverse circumstances and risk aversion. To me this model ties up certain things we know about the life circumstances, risk profiles and mood tones of people, and deepens our understanding.
Nettle, D. (2009). An evolutionary model of low mood states. Journal of Theoretical Biology, 257(1), 100-103. DOI: 10.1016/j.jtbi.2008.10.033
In my last post I hinted that bipolar mania and depression may both be characterized by an excessive and overactive self-regulatory focus, with promotion focus being related to mania and prevention focus to depression. It is important to pause and note that the bipolar propensity is towards more self-referential, goal-directed activity, resulting in excessive use of self-regulatory focus. To clarify, I am sticking my neck out and claiming that depression is marked by an excessive obsession with self-oriented, goal-directed activities, but with a preventive focus, thus concentrating more on the self's responsibilities, duties, obligations etc. with respect to near and dear ones. Mania, on the other hand, also has an excessive self-oriented, goal-directed focus, but the focus is promotional, with an obsession with hopes, aspirations etc., which are relatively more inward-focused and not too dependent on significant others.
Thus, my characterization of depression is a state where the regulatory reference is negative (one is focused on avoiding landing in a negative end-state, like being a burden on others), the regulatory anticipation is negative (one anticipates pain as a result of almost any act one may perform and thus dreads day-to-day activity), and the regulatory focus is negative (a preventive focus whereby one is more concerned with duties and obligations, and security is a paramount need). The entire depressive syndrome can be summed up as an overactivity of avoidance-based mechanisms. However, please note that there is still an excess of self-referential/self-focused thinking, and one is greatly motivated (although perhaps lacking energy) to bridge the difference between the real self and the 'ought' self. One can say that one's whole life revolves around trying to become the 'ought' self, or rather one conceptualizes oneself in terms of the 'ought' self.
Contrast this with mania, where the regulatory reference is positive (one is focused on achieving something grandiose), regulatory anticipation is positive (one feels in control and believes that only good things can happen to the self), and regulatory focus is positive (a promotional focus whereby one is more concerned with hopes, aspirations and growth/actualization needs). Still, just like in depression, there is an excess of focus on the self, and one is greatly motivated (and also has the energy) to bridge the difference between the real and the 'ideal' self. One can say that one's whole life revolves around trying to become the 'ideal' self, or rather one conceptualizes oneself in terms of an 'ideal' self.
What can we predict from the above? We know that the brain's default network is involved in self-focused thoughts and ruminations. We can predict, and know for a fact, that the default network is overactive in schizophrenics (and thus, by extension, in bipolars, who I believe have the same underlying pathology, at least as far as the psychotic spectrum is concerned), and thus we can say with confidence that the regulatory focus should indeed be high in bipolars and should be correlated with default network activity. We can also predict that during the manic phase the promotion-related neural network should be more active, and in the depressive phase the prevention-related areas of the brain should be more active. This last hypothesis still needs experimentation, but let's backtrack a bit and first look at the neural correlates of the promotion and prevention regulatory self-focus.
For this, I refer readers to a study that is, in my view, important: it tried to dissociate medial PFC and PCC activity (both of which belong to the default network) while people engaged in self-reflection. Here is the abstract of the study:
Motivationally significant agendas guide perception, thought and behaviour, helping one to define a ‘self’ and to regulate interactions with the environment. To investigate neural correlates of thinking about such agendas, we asked participants to think about their hopes and aspirations (promotion focus) or their duties and obligations (prevention focus) during functional magnetic resonance imaging and compared these self-reflection conditions with a distraction condition in which participants thought about non-self-relevant items. Self-reflection resulted in greater activity than distraction in dorsomedial frontal/anterior cingulate cortex and posterior cingulate cortex/precuneus, consistent with previous findings of activity in these areas during self-relevant thought. For additional medial areas, we report new evidence of a double dissociation of function between medial prefrontal/anterior cingulate cortex, which showed relatively greater activity to thinking about hopes and aspirations, and posterior cingulate cortex/precuneus, which showed relatively greater activity to thinking about duties and obligations. One possibility is that activity in medial prefrontal cortex is associated with instrumental or agentic self-reflection, whereas posterior medial cortex is associated with experiential self-reflection. Another, not necessarily mutually exclusive, possibility is that medial prefrontal cortex is associated with a more inward-directed focus, while posterior cingulate is associated with a more outward-directed, social or contextual focus.
The authors then touch upon something similar to what I have said above: one can be too planful or goal-directed (the bipolar propensity), but it would still make sense to find out whether the focus is promotional or preventive. To quote:
The idea of variation in individuals’ regulatory focus highlights the difference between agendas and traits; two people could both be described by the trait ‘planful’, but planful about what? A person with a predominantly promotion focus would be more likely to be planful about attaining positive rewards or outcomes, while a person with a predominantly prevention focus would be more likely to be planful about avoiding negative events or outcomes. Although a promotion or prevention focus may dominate, the aspects of the self that are active change dynamically across situations (e.g. Markus and Wurf, 1987), thus most individuals have both promotion and prevention agendas. For example, the same person can hold both the hope of becoming rich (a promotion agenda) and the duty to support an aging parent (a prevention agenda), or the aspiration to be a good citizen and the obligation to be a well-informed voter. As individuals, hopes and aspirations and duties and obligations make up a large part of our mental life and constitute the motivational scaffolding for much of our behaviour.
Now comes the study design:
The present studies investigated neural activity when participants were asked to think about self-relevant agendas related to either a promotion (think about your hopes and aspirations) or prevention (think about your duties and obligations) focus. We compared neural activity associated with thinking about these two different types of self-relevant agendas and with thinking about non-self-relevant topics (distraction). We expected greater activity in anterior and/or posterior medial regions associated with these two self-reflection conditions compared with the distraction control condition because thinking about one’s agendas, like thinking about one’s traits, is self-referential. Such a finding would also be consistent, for example, with Luu and Tucker’s (2004) proposal that both anterior cingulate and posterior cingulate cortex contribute to action regulation by representing goals and expectancies.
And this is what they found:
A double dissociation was found when participants were cued to think about promotion and prevention agendas on different trials for the first time during scanning (Experiment 2) and when they spent several minutes thinking about either promotion or prevention agendas before scanning (Experiment 1), indicating that it results from what participants are thinking about during the scan and not from some general effect (e.g. mood) carried over from the pre-scan period of self-reflection.
Here is what they discuss:
In short, the double dissociation between medial PFC and anterior/inferior medial posterior areas and our two self-reflection conditions indicates that these brain areas serve somewhat different functions during self-focus. There are a number of interesting possibilities that remain to be sorted out. Differential activity in these anterior medial and posterior medial regions as a function of the types of agendas participants were asked to think about could reflect: (i) differences in the representational content in the specific features of agendas, schemas, possible selves and so forth that constitute hopes and aspirations on the one hand and duties and obligations on the other (cf. Luu and Tucker, 2004); (ii) differences in the type(s) of component processes these agendas are likely to engage and/or the representational content they are likely to activate, for example, discovering new possibilities (hopes) vs retrieving episodic memories (e.g. Maddock et al., 2001) of past commitments (duties); (iii) differences in affective significance of hopes and aspirations (attaining the positive) and duties and obligations (avoiding the negative, Higgins, 1997; 1998); (iv) different aspects of the subjective experience of self, such as the subjective experience of control (an instrumental self) vs the subjective experience of awareness (an experiential self; Johnson, 1991; Johnson and Reeder, 1997; compare, e.g. Searle, 1992 and Weiskrantz, 1997, vs Shallice, 1978 and Umilta, 1988); (v) differences in the social significance of hopes and aspirations (more individual) and duties and obligations (involving others). This last possibility is suggested by findings linking the posterior cingulate with taking the perspective of another (Jackson et al., 2006). It may be that thinking about duties and obligations (a more outward focus) tends to involve more perspective-taking than does thinking about hopes and aspirations (a more inward focus). 
The greater number of mental/emotional references from the promotion group on the pre-scan essay and the tendency for a greater number of references to others from the prevention group are consistent with the hypothesis that medial PFC activity is associated with a more inward focus whereas posterior cingulate/precuneus activity is associated with a more outward, social focus. Clarifying the basis of the similarities and differences between neural activation associated with thinking about hopes and aspirations vs duties and obligations would begin to help differentiate the relative roles of brain regions in different types of self-reflective processing.
They do discuss the clinical significance of their studies, but not in the terms I would have loved. I would like to see whether there is state/trait hyperactivity and a dissociation between mPFC and PCC activation when the variable of depressive-episode or manic-episode subjects is introduced. I'll place my bets that there would be an interaction between the type of episode and overactivity in the corresponding default-brain regions, but would like to see that data collected.
So my thesis is that the self-reflective, self-focused default network is overactive in bipolar/psychotic spectrum people, but a bias or tilt towards a promotion or preventive focus leads to their recurring and periodic episodes of mania and depression.
Lastly, let me touch upon affect in these states and what Higgins had to say about this in his paper, covered yesterday. Higgins proposed that bipolar disorder is due to a promotional focus, with mania induced when there is not much mismatch (or awareness of mismatch) between the ideal and real self, while depression, or sadness and melancholia, is induced when one becomes aware of the discrepancy between the ideal and the real self. He proposes that 'ought' and real self discrepancy leads to anxiety and nervousness/agitation, while a preventive focus and congruency between 'ought' and real self leads to calmness/quiescence.
I disagree with his formulation insofar as I differentiate between a regulatory focus and the corresponding awareness of discrepancies along that dimension. To Higgins they are the same: if someone has a promotional focus, he would also be more aware of the discrepancies between his ideal and real self and thus be saddened. I disagree. I believe that if one has a promotional focus, one is driven by goals to bring the real self as close to the ideal self as possible, and if one is not able to do so, one would use defense mechanisms to delude oneself rather than admit the reality, because incongruence along the focused dimension is too painful. However, because one is consciously focused on promotion, one would be aware of trade-offs and would acknowledge that one's 'ought' self, which anyway is not too important for one's self-concept, is not congruent with the real self. Thus, one with a predominant promotion focus may be painfully aware of the discrepancy between his 'ought' and real self, and thus might be nervous, agitated, irritable: all symptoms of mania.
A depressive person, on the other hand, has a predominant preventive focus, and all actions/ruminations are driven by responsibilities and obligations. Here, acknowledging to oneself that one has failed in meeting obligations may be catastrophic, so one will try to delude oneself that one is closer to the 'ought' self than is the case. However, one may not require any defense mechanisms when judging the discrepancy between the 'ideal' and real self, as that 'ideal' self is no longer a matter of life and death! One would be aware that one is not focusing much on hopes and aspirations and would thus feel despondent/sad/melancholic: again, classical symptoms of depression. Yet, despite the affect of sadness, all rumination would be focused on the 'ought' self, and thus the content would be of guilt, duties, burdens, responsibilities, etc.
I'm sure there is some grain of truth in my formulation, but I won't be able to state so emphatically unless the above-proposed dissociation study involving the default regions and bipolar people is done. If one of you decides to do it, do let me know the results, even if they contradict the thesis.
Johnson, M. (2006). Dissociating medial frontal and posterior cingulate activity during self-reflection. Social Cognitive and Affective Neuroscience, 1(1), 56-64. DOI: 10.1093/scan/nsl004
Higgins, E. T. (1997). Beyond pleasure and pain. American Psychologist, 52, 1280-1300.
The hedonic principle says that we are motivated to approach pleasure and avoid pain. This, as per Higgins, is too simplistic a formulation. He supplants it with his concepts of regulatory focus, regulatory anticipation, and regulatory reference. That is a lot of jargon for a single post, but let us see if we can make sense of it.
First, let us conceptualize a desired end-state that an organism wants to be in, say eating food and satisfying hunger. This desired end-state becomes the current goal of the organism and leads to goal-directed behavior. Now, it is proposed that, given this desired end-state, the organism has two ways to go about achieving or moving towards it. If the organism has a promotion or achievement self-regulation focus, then it will be more sensitive to whether the positive outcome is achieved or not, and will thus have an approach orientation whereby it tries to match its next state to the desired state, or to approach the desired end-state as closely as possible. On the other hand, if the organism has a prevention or safety self-regulation focus, then it will be more sensitive to the negative outcome, that is, to whether it becomes worse off after the behavior, and will have an avoidance orientation whereby it tries to minimize the mismatch between its next state and the desired state. Thus, given n next states with different food availability, the person with a promotion focus will choose a next state that is as close as possible, say within a particular threshold, to the desired state of satiety; while the person with a prevention focus will be driven by avoiding all the states that have sub-threshold food availability and are thus mismatched with the end-goal of satiety. The number and identity of the states available for choosing from are thus different for the two groups: the first set is derived by including the states that are within a particular range of the end-state; the second set is derived by excluding all the states that are not within that range. Put this way, it is easy to see that these strategies of promotion and prevention focus place different cognitive and computational demands: the former requires exploration/maximizing, while the latter may be satisfied by satisficing.
(See my earlier post on exploration/exploitation and satisficers/maximizers, where I believe I was slightly mistaken.)
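The contrast between the two strategies can be made concrete with a minimal sketch (my own toy illustration, not anything from Higgins's paper): a promotion-focused chooser examines all candidate states and maximizes the match to the desired end-state, while a prevention-focused chooser satisfices, accepting the first state that is not excluded as a mismatch. The numbers and threshold are arbitrary.

```python
# Toy sketch of the two choice strategies described above (my own
# illustration): promotion = maximize the match, prevention = satisfice.

DESIRED = 100    # desired end-state: full satiety (units of food)
THRESHOLD = 20   # how far from the end-state still counts as a "match"

def promotion_choice(states, desired=DESIRED):
    # Approach matches: examine every candidate state and pick the one
    # closest to the desired end-state (exploration / maximizing).
    return min(states, key=lambda s: abs(desired - s))

def prevention_choice(states, desired=DESIRED, threshold=THRESHOLD):
    # Avoid mismatches: scan the candidates and accept the first state
    # that is not excluded as a mismatch (satisficing).
    for s in states:
        if abs(desired - s) <= threshold:
            return s
    return None  # every state was a mismatch

states = [30, 55, 85, 95, 100]
print(promotion_choice(states))    # picks the best match: 100
print(prevention_choice(states))   # stops at the first acceptable state: 85
```

Note how the maximizer must evaluate the whole candidate set while the satisficer can stop early: that is the different computational demand the paragraph above alludes to.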
Now that I have explained, hopefully in simple terms, the concepts of self-regulatory focus, let me quote from the article and show how Higgins arrives at the same.
The theory of self-regulatory focus begins by assuming that the hedonic principle should operate differently when serving fundamentally different needs, such as the distinct survival needs of nurturance (e.g., nourishment) and security (e.g., protection). Human survival requires adaptation to the surrounding environment, especially the social environment (see Buss, 1996). To obtain the nurturance and security that children need to survive, children must establish and maintain relationships with caretakers who provide them with nurturance and security by supporting, encouraging, protecting, and defending them (see Bowlby, 1969, 1973). To make these relationships work, children must learn how their appearance and behaviors influence caretakers’ responses to them (see Bowlby, 1969; Cooley, 1902/1964; Mead, 1934; Sullivan, 1953). As the hedonic principle suggests, children must learn how to behave in order to approach pleasure and avoid pain. But what is learned about regulating pleasure and pain can be different for nurturance and security needs. Regulatory-focus theory proposes that nurturance-related regulation and security-related regulation differ in regulatory focus. Nurturance-related regulation involves a promotion focus, whereas security-related regulation involves a prevention focus. ….. People are motivated to approach desired end-states, which could be either promotion-focus aspirations and accomplishments or prevention-focus responsibilities and safety. But within this general approach toward desired end-states, regulatory focus can induce either approach or avoidance strategic inclinations. Because a promotion focus involves a sensitivity to positive outcomes (their presence and absence), an inclination to approach matches to desired end-states is the natural strategy for promotion self-regulation.
In contrast, because a prevention focus involves a sensitivity to negative outcomes (their absence and presence), an inclination to avoid mismatches to desired end-states is the natural strategy for prevention self-regulation (see Higgins, Roney, Crowe, & Hymes, 1994).
Figure 1 (not shown here, go read the article for the figure) summarizes the different sets of psychological variables discussed thus far that have distinct relations to promotion focus and prevention focus (as well as some variables to be discussed later). On the input side (the left side of Figure 1), nurturance needs, strong ideals, and situations involving gain-nongain induce a promotion focus, whereas security needs, strong oughts, and situations involving nonloss-loss induce a prevention focus. On the output side (the right side of Figure 1), a promotion focus yields sensitivity to the presence or absence of positive outcomes and approach as strategic means, whereas a prevention focus yields sensitivity to the absence or presence of negative outcomes and avoidance as strategic means.
Higgins then goes on to describe many experiments that support this differential regulatory focus and show how it differs from pleasure-pain valence-based approaches. He also discusses regulatory focus in terms of signal detection theory, and here it is important to note that a promotion focus leads to leaning towards (being biased towards) increasing hits and reducing misses, while a prevention focus means leaning more towards increasing correct rejections and minimizing false alarms. Thus, a promotion-focused individual is driven by finding correct answers and minimizing errors of omission, while a prevention-focused person is driven by avoiding incorrect answers and minimizing errors of commission. In Higgins's words:
Individuals in a promotion focus, who are strategically inclined to approach matches to desired end-states, should be eager to attain advancement and gains. In contrast, individuals in a prevention focus, who are strategically inclined to avoid mismatches to desired end-states, should be vigilant to insure safety and nonlosses. One would expect this difference in self-regulatory state to be related to differences in strategic tendencies. In signal detection terms (e.g., Tanner & Swets, 1954; see also Trope & Liberman, 1996), individuals in a state of eagerness from a promotion focus should want, especially, to accomplish hits and to avoid errors of omission or misses (i.e., a loss of accomplishment). In contrast, individuals in a state of vigilance from a prevention focus should want, especially, to attain correct rejections and to avoid errors of commission or false alarms (i.e., making a mistake). Therefore, the strategic tendencies in a promotion focus should be to insure hits and insure against errors of omission, whereas in a prevention focus, they should be to insure correct rejections and insure against errors of commission .
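The signal-detection framing can be made concrete with a toy sketch (my own illustration of the point, not data from Higgins): the same evidence is scored twice, once against a liberal criterion (the "eager" promotion state) and once against a conservative criterion (the "vigilant" prevention state). The trial values and criteria are invented for illustration.

```python
# Toy signal-detection sketch: the same evidence scored against a
# liberal ("eager"/promotion) and a conservative ("vigilant"/prevention)
# criterion.  Each trial is (evidence_strength, signal_actually_present).

trials = [(0.9, True), (0.6, True), (0.4, True),
          (0.5, False), (0.3, False), (0.1, False)]

def score(trials, criterion):
    """Count (hits, misses, false alarms, correct rejections) for the
    rule: respond 'signal present' whenever evidence >= criterion."""
    hits = misses = fas = crs = 0
    for evidence, present in trials:
        say_yes = evidence >= criterion
        if present and say_yes:
            hits += 1
        elif present:
            misses += 1
        elif say_yes:
            fas += 1
        else:
            crs += 1
    return hits, misses, fas, crs

# Eager (promotion): low criterion -> all hits, no misses, but at the
# price of a false alarm.
print(score(trials, 0.35))   # (3, 0, 1, 2)

# Vigilant (prevention): high criterion -> no false alarms, all correct
# rejections, at the price of two misses.
print(score(trials, 0.65))   # (1, 2, 0, 3)
```

The point is that neither bias is "better": shifting the criterion simply trades errors of omission (misses) against errors of commission (false alarms), which is exactly the promotion/prevention asymmetry Higgins describes.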
He next discusses Expectancy × Value effects in utility research. Basically, whenever one tries to decide between two or more alternative actions or outcomes, one tries to find the utility of a particular decision or behavioral act based on both the value and the expectancy of the outcome. Value means how desirable or undesirable that outcome is to the person (i.e., what value is attached to it). Expectancy means how probable it is that the contemplated action (the one being decided upon) will lead to the outcome. By way of an example: if I am hungry, I want to eat food. Let's say there are two actions or decisions, with different utilities, that can lead to my hunger reduction. The first involves begging for food from the shopkeeper; the second involves stealing the food from the shopkeeper. The first may have positive value (begging might not be that embarrassing) but low expectancy (the shopkeeper is miserly and unsympathetic); the second act may have negative value (I believe that stealing is wrong and would like to avoid the act) but high expectancy (I am sure I'll be able to steal the food and satisfy my hunger). The utility I impart to the two acts may determine which act I eventually decide to indulge in.
Higgins touches on research showing that Expectancy and Value have a multiplicative effect, i.e., as expectancy and value increase, the motivation to take that decision or course of action increases non-linearly. He clarifies that this interaction effect is seen in a promotion focus, but not in a prevention focus:
Expectancy-value models of motivation assume not only that expectancy and value have an impact on goal commitment as independent variables but also that they combine multiplicatively (Lewin, Dembo, Festinger, & Sears, 1944; Tolman, 1955; Vroom, 1964; for a review, see Feather, 1982). The multiplicative assumption is that as either expectancy or value increases, the impact of the other variable on commitment increases. For example, it is assumed that the effect on goal commitment of higher likelihood of goal attainment is greater for goals of higher value. This assumption reflects the notion that the goal commitment involves a motivation to maximize the product of value and expectancy, as is evident in a positive interactive effect of value and expectancy. This maximization prediction is compatible with the hedonic or pleasure principle because it suggests that people are motivated to attain as much pleasure as possible. Despite the almost universal belief in the positive interactive effect of value and expectancy, not all studies have found this effect empirically (see Shah & Higgins, 1997b). Shah and Higgins proposed that differences in the regulatory focus of decision makers might underlie the inconsistent findings in the literature. They suggested that making a decision with a promotion focus is more likely to involve the motivation to maximize the product of value and expectancy. A promotion focus on goals as accomplishments should induce an approach-matches strategic inclination to pursue highly valued goals with the highest expected utility, which maximizes Value × Expectancy. Thus, the positive interactive effect of value and expectancy assumed by classic expectancy-value models should increase as promotion focus increases. But what about a prevention focus? 
A prevention focus on goals as security or safety should induce an avoid-mismatches strategic inclination to avoid all unnecessary risks by striving to meet only responsibilities that are clearly necessary. This strategic inclination creates a different interactive relation between value and expectancy. As the value of a prevention goal increases, the goal becomes a necessity, like the moral duties of the Ten Commandments or the safety of one’s child. When a goal becomes a necessity, one must do whatever one can to attain it, regardless of the ease or likelihood of goal attainment. That is, expectancy information becomes less relevant as a prevention goal becomes more like a necessity. With prevention goals, motivation would still generally increase when the likelihood of goal attainment is higher, but this increase would be smaller for high-value goals (i.e., necessities) than low-value goals. Thus, the second prediction was that the positive interactive effect of value and expectancy assumed by classic expectancy value models would not be found as prevention focus increased. Specifically, as prevention focus increases, the interactive effect of value and expectancy should be negative.
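The predicted interaction can be sketched numerically. This is my own toy formalization, not Shah and Higgins's actual model: under promotion focus, commitment is taken as Value × Expectancy, so expectancy matters more for high-value goals; for prevention focus I use a hypothetical form in which expectancy's weight shrinks as value approaches a necessity. Both quantities are scaled to [0, 1].

```python
# Toy formalization (my own, not Shah & Higgins's equation) of the
# positive vs negative Value x Expectancy interaction.

def promotion_commitment(value, expectancy):
    # Classic multiplicative expectancy-value model.
    return value * expectancy

def prevention_commitment(value, expectancy):
    # Hypothetical form: expectancy's weight shrinks as the goal
    # approaches a necessity (value -> 1).
    return value + (1.0 - value) * expectancy * 0.5

def expectancy_effect(model, value):
    """Gain in commitment when expectancy rises from 0.2 to 0.8."""
    return model(value, 0.8) - model(value, 0.2)

# Promotion: the expectancy effect grows with value (positive interaction).
print(expectancy_effect(promotion_commitment, 0.2))   # small for low-value goals
print(expectancy_effect(promotion_commitment, 0.9))   # large for high-value goals

# Prevention: the expectancy effect shrinks with value (negative interaction).
print(expectancy_effect(prevention_commitment, 0.2))  # sizeable for low-value goals
print(expectancy_effect(prevention_commitment, 0.9))  # near zero for necessities
```

Under these assumed functional forms, raising expectancy helps a high-value promotion goal most, but barely moves commitment to a high-value prevention goal: a necessity is pursued regardless of the odds, just as the quoted passage predicts.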
And that is exactly what they found! The paper touches on many other corroborating studies, and the interested reader can go to the source for more. Here I will now focus on his concepts of regulatory reference and regulatory anticipation.
Regulatory reference is the tendency to be driven either by positive, desired end-states as the reference point and goal, or by negative, undesired end-states as the most prominent goals. For example, eating food is a desirable end-state, while being eaten by others is an undesired end-state. Now, an organism may be driven by the end-state of 'getting food' and would thus be regulating approach behavior: how to go about getting food. It is important to contrast this with regulatory focus: while searching for food, it may have a promotion orientation, focusing on matching the end-state, or it may have a prevention focus, i.e., avoiding states that don't contain food; but it is still driven by a 'positive' or desired end-state. On the other hand, when the regulatory reference is a negative or undesirable end-state like 'becoming food', then avoidance behavior is regulated, i.e., behavior is driven by avoiding the end-state. Thus, any state that keeps one away from 'being eaten' is the one that is desired; this may involve a promotion focus, as in approaching states that are opposite to the undesired state and provide safety from the predator, or it may involve a prevention focus, as in avoiding states that lead one closer to the undesired end-state. In the words of Higgins:
Inspired by these latter models in particular, Carver and Scheier (1981, 1990) drew an especially clear distinction between self-regulatory systems that have positive versus negative reference values. A self-regulatory system with a positive reference value has a desired end state as the reference point. The system is discrepancy reducing and involves attempts to move one’s (represented) current self-state as close as possible to the desired end-state. In contrast, a self-regulatory system with a negative reference value has an undesired end-state as the reference point. This system is discrepancy-amplifying and involves attempts to move the current self-state as far away as possible from the undesired end-state.
To me, regulatory reference is similar to the Value associated with a utility decision: it determines whether, when we are choosing between different actions or goals, the end-states have a positive or a negative connotation.
That brings us to regulatory anticipation: the now well-known desire/dread functionality of dopamine-mediated brain regions that are involved in the anticipation of pleasure and pain and that drive behavior. This anticipation of pleasure or pain is driven by our expectancies of how our actions will yield the desired or undesired outcomes, and can be treated as the equivalent of Expectancy in utility decisions. The combination of the independent factors of regulatory reference and regulatory anticipation determines which end-state or goal is activated as the next target for the organism. Once a goal is activated, the organism's tendency towards a promotion or prevention focus determines how it strategically uses approach/avoidance mechanisms to achieve that goal or move towards the end-state. Let us also look at regulatory anticipation as described by Higgins:
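How the three constructs might compose can be sketched speculatively (this is my own reading, not a model Higgins gives): regulatory reference and regulatory anticipation jointly determine which goal is activated, and regulatory focus then determines how that goal is pursued. All names and numbers below are invented for illustration.

```python
# Speculative sketch (my own reading of Higgins, not his model):
# reference + anticipation select WHICH goal; focus selects HOW.

from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    reference: float     # +1 for a desired end-state, -1 for an undesired one
    anticipation: float  # expectancy of reaching (or hitting) the end-state

def activation(goal):
    # Strongly anticipated desired states and strongly anticipated
    # undesired states both demand regulation, hence the absolute value.
    return abs(goal.reference * goal.anticipation)

def strategy(goal, focus):
    """focus is 'promotion' or 'prevention'."""
    if goal.reference > 0:   # positive reference: discrepancy-reducing
        return "approach the match" if focus == "promotion" else "avoid mismatches"
    # Negative reference: discrepancy-amplifying regulation either way.
    return ("approach states opposite the threat" if focus == "promotion"
            else "avoid states near the threat")

goals = [Goal("get food", +1, 0.6), Goal("avoid becoming food", -1, 0.9)]
active = max(goals, key=activation)      # anticipation picks the target
print(active.name, "->", strategy(active, "prevention"))
```

The design point is the independence of the two steps: reference and anticipation are properties of the goal, while focus is a property of the regulating organism, so the same goal can be pursued by either strategy.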
Freud (1920/1950) described motivation as a “hedonism of the future.” In Beyond the Pleasure Principle (Freud, 1920/1950), he postulated that people go beyond total control of the “id” that wants to maximize pleasure with immediate gratification to regulating as well in terms of the “ego” or reality principle that avoids punishments from norm violations. For Freud, then, behavior and other psychical activities were driven by anticipations of pleasure to be approached (wishes) and anticipations of pain to be avoided (fears). Lewin (1935) described how the “prospect” of reward or punishment is involved in children learning to produce or suppress, respectively, certain specific behaviors (see also Rotter, 1954). In the area of animal learning, Mowrer (1960) proposed that the fundamental principle underlying motivated learning was regulatory anticipation, specifically, approaching hoped-for desired end-states and avoiding feared undesired endstates. Atkinson’s (1964) personality model of achievement motivation also proposed a basic distinction between self-regulation in relation to “hope of success” versus “fear of failure.” Wicker, Wiehe, Hagen, and Brown (1994) extended this notion by suggesting that approaching a goal because one anticipates positive affect from attaining it should be distinguished from approaching a goal because one anticipates negative affect from not attaining it. In cognitive psychology, Kahneman and Tversky’s (1979) “prospect theory” distinguishes between mentally considering the possibility of experiencing pleasure (gains) versus the possibility of experiencing pain (losses).
Why have I been dwelling on this, and how does it fit into the larger framework? Wait for the next post, but the hint is that I believe that bipolar mania as well as depression is driven by too much goal-oriented activity: in mania the focus being promotion, while in depression the focus being prevention. Higgins does discuss mania and depression in his article, but my views differ and will require a new and separate blog post. Stay tuned!
Higgins, E. T. (1997). Beyond pleasure and pain. American Psychologist, 52, 1280-1300.
There is a new review article in CMAJ about the neurobiology of depression. And then there is the multi-part series on depression over at Neurotopia by the excellent Sci.
So I thought I'd link to these for the benefit of my readers. While it may sound like an oxymoron to do a review of a review, let me briefly summarize the review article.
The article lists three important contributing factors for depression: the first is genetics; the second, childhood stress; and the third, ongoing or recent psychosocial stress. And of course, different neurobiological mechanisms underlie all three factors.
To take the first example, consider the famed monoamine theory of depression, whereby low baseline serotonin (and norepinephrine) levels in the brain are held responsible for depressive symptoms. This hypothesis derives most of its evidence from the effects of antidepressants on the brain. Now, depression also has a heritable genetic component (apparent from twin studies), and some of the heritability of depression can be explained by polymorphisms of various genes affecting the serotonin system, primary among them the gene coding for the serotonin transporter, or SERT. Thus, the underlying serotonin system can be treated as one biological system that has a strong genetic component.
As a second example, consider the hypothalamic-pituitary-adrenal (HPA) axis, which is involved in the response to stress. This system develops abnormally if the child is exposed to stress during a critical developmental window. Experiments with rats and monkeys confirm that an abnormal and stressful environment during early childhood leads to abnormal functioning of this axis, which later predisposes to depression. Thus, the HPA axis may be taken as a proxy for the component that is due to development and epigenetics.
As a third example, consider brain-derived neurotrophic factor (BDNF). BDNF is responsible for the survival of new neurons and for new synapse formation (synaptic plasticity) during adulthood; new neurons and new synapses help us learn (via neurogenesis in the hippocampus), especially when the environment is stressful. There are two polymorphisms of the gene coding for BDNF; the 'MET' allele causes reduced hippocampal volume at birth, hypoactivity of the hippocampus in the resting state, increased hippocampal metabolism while learning, and relatively poor hippocampus-dependent memory function. From all this it is apparent that the MET allele somehow leads to lower synthesis of BDNF and thus to poorer learning in the hippocampus as a result of reduced neurogenesis/synaptogenesis. Now, the same MET allele also raises the risk of depression, and the mediating factor is the stress responsivity of the individual. Thus, BDNF may mediate a person's sensitivity to the same external psychosocial stress and might be very crucial via gene-environment interaction effects. Also, prolonged stress, which may result in prolonged BDNF secretion and thus lead to toxicity and opposite, paradoxical effects, may be another putative mechanism linking stress exposure in adulthood to the underlying pathophysiology of reduced neurogenesis.
The above may seem too simplistic, but it points us in the right direction: some neurobiological systems, like the serotonin system, may be largely genetic in nature, and our treatment approaches should be based around this fact. Others, like HPA axis malfunctioning, may be entirely environmental in origin, and perhaps preventive interventions, like ensuring a stress-free childhood for all, should be the policy focus here. Depending on the plasticity of the adult HPA axis, therapy or medications may be the treatment options. Finally, other neurobiological systems involved, like BDNF and stress sensitivity/over-exposure, may display complex gene-environment interactions, and again, knowing the nature of these systems will help us counter the symptoms using a combination of CBT and medication.
Depression is definitely much too complex a disorder to be completely understood on the basis of a single review article, or even a series of blog posts, but the underlying neurobiological mechanisms and systems clearly indicate how genetics, environment (especially during critical developmental windows), and epigenetics (gene-environment interactions) are involved in its etiology, and how different interventions and treatments will have to be developed taking these into account.
aan het Rot, M., Mathew, S., & Charney, D. (2009). Neurobiological mechanisms in major depressive disorder. Canadian Medical Association Journal, 180(3), 305-313. DOI: 10.1503/cmaj.080697
I normally do not like to thrash articles or opinion pieces, but this article by Michael Shermer in Scientific American has to be dealt with, as it is masquerading as an authoritative debunking by one of the foremost skeptics in one of the most respected magazines. Yet it is low on science and facts and leans more towards opinions, biases, and prejudices.
Shermer, from the article, seems to be generally antagonistic to stage theories, as he thinks they are mere narratives and not science. His method of discrediting stage theories is to lump them all together (from Freud's theories to Kohlberg's), and then, by picking on one of them (the stages-of-grief theory of Kubler-Ross), he tries to discredit them all. This is a little surprising. While I too believe (and it is one of the prime themes of this blog) that most stage theories have something in common and follow a general pattern, I would be reluctant to club developmental stage theories, which usually involve stages while the child is growing, with other stage theories like the stages of grief, in which no physical development is concurrent with the staged process; rather, the stages occur in adults who have faced a particular situation and are trying to cope with it. In the former case, the children are definitely growing, their brains are maturing, and there is a very real substrate that could give rise to distinctive stages; in the latter case, the stages may not be tied so much to the development of neural tissue as to its plasticity. The question in the latter case would be: does the brain adapt to losses, like catastrophic news or the death of a loved one, by reorganizing a bit, and does this reorganization happen in phases or stages? The two issues of childhood development and adult plasticity are related, but may be different too. With adult neurogenesis now becoming prominent, I won't be surprised if we find neural mechanisms for some of these adult stages too, like the stages of grief, but I would still keep the issues separate.
Second, assuming that Shermer is right, that at least the stage theory of grief as proposed by Kubler-Ross is incorrect, and also that it can be clubbed with other stage theories, would it be proper to conclude that all stage theories are incorrect based on the fact that one of them is false? It would be as if someone proposed a modular architecture of mind, and different modules of mind were proposed accordingly, but one of the proposed modules did not stand the scrutiny of time (let's say a module for golf-playing was not found in the brain); does that mean that all theories holding that the brain is organized modularly, for at least some functions, are wrong, and that all the other modules are proved non-existent? Maybe the grief-stages theory is wrong, but how can one generalize from that to all developmental stage theories, many of which have been validated extensively (like Piaget's), and go on a general rant against all things 'stages'?
Next, let me address another fallacy that Shermer commits, the causal-analogy fallacy: assuming that if two things are analogous, then one thing is causing the other, when in fact no directional inference can be drawn from the analogy alone. He asserts that humans are pattern-seeking, story-telling primates who like to explain away their experiences with stories or narratives, especially as these provide structure over unpredictable and chaotic happenings. Now, I am with Shermer up to this point, and this has been my thesis too; but then he takes a leap and says that this is the reason we come up with stage theories. Why 'stage' theories? Why not just theories? Any theory, in as much as it is an attempt to provide a framework for understanding and explication, is a potential narrative, and perhaps anyone who tries to come up with a theory is guilty of story-telling by extension. The leap he is making here is the assumption that story-telling is a 'stage' process and that a typical story follows a pattern, namely the unfolding of plot in distinct stages.
Now, I agree with the leap that Shermer is making: a narrative is not just any continuous thread of yarn that the author spins; it normally involves discrete stages, and though I have not touched on this before, Christopher Booker's work that delineated the eight basic story plots also deals with the five-stage unfolding of plot in all the different basic story plots. So I am not contesting the fact that story-telling is basically a stage process, with distinct stages through which the protagonist passes, or distinct stages of plot development; what I am contesting is the direction of causality. Is it because we have evidence of distinct stages in the lives of individuals, and, in general, evidence for the eight-fold or five-fold stages of development of various faculties, that our stories reflect distinct stages as they unfold and that the monomyth has a distinct stage structure? Or is it because our stories have structures in the form of stages that the theories we develop also have stages? I believe that some theorizing in terms of stages may indeed be driven by our desire to compartmentalize everything into the eight or so basic stages and environmental adaptive problems we have encountered repeatedly, which have become part of our mythical narrative structure; but, most parsimoniously, our mythical narrative structure is stage-bound because we have observed regularities in our development and life that can only be explained by resorting to discrete stages, rather than a concept of continuous, incremental improvement/development/unfolding.
Before moving on, let me give a brief example of the power of stage theories and how they can be traced to neural mechanisms. I'll be jumping from the very macro phenomena I have been discussing to the very micro phenomenon of perception. Consider the visuomotor development of a child. Early in life there is a stage when oculomotor control is mostly due to subcortical regions like the superior colliculus, and the higher cortical regions are not much involved (they are not yet sufficiently developed/myelinated). The retina of the eye is such that the foveal region is underdeveloped, and this combination means that infants are very good at orienting their eyes to moving targets in their peripheral vision, but poor at color and form discrimination. Also, they can perform saccades first, the capability to make anti-saccades develops next, and the capacity to make smooth-pursuit movements comes later. There are distinct stages of oculomotor control that a child moves through, and this definitely affects its perception of the world (for example, one can recognize and discriminate based on form first and color later, as the visual striate areas for these mature in that order). In short, there are strong anatomical, physiological, and psychological substrates for most of the developmental stage theories.
Now let me address why Shermer, whom I normally admire, has taken this perverse position. It is because his Skeptic magazine recently published an article by Russell P. Friedman, executive director of the Grief Recovery Institute in Sherman Oaks, Calif. (www.grief-recovery.com), and John W. James, author of The Grief Recovery Handbook (HarperCollins, 1998), which tried to debunk an article published in JAMA that found support for the five-stage grief theory. That Skeptic article received a well-deserved thrashing from some reputed blogs; see this World of Psychology post that exposes many of the holes in Friedman and James's argument. So, possibly out of desperation, Shermer thought: why not settle the scores and expose all stage theories as pseudoscience? Unfortunately, he fails miserably in defending his publication, and we have seen above why. Now, let us come to the meat of the controversy: the stages-of-grief theory of Kubler-Ross, for which the Yale group found evidence, and which the Skeptics didn't like and found worth criticizing. I have read both the original JAMA paper and the Skeptic article and see some merit on both sides. In fact, I even agree to an extent with the stance that Friedman et al. have taken, especially their decoupling of the stages of grief from the stages of the dying person / stages of adjustment to catastrophic news of death. Some excerpts:
IN 1969 THE PSYCHIATRIST ELIZABETH KÜBLER-ROSS wrote one of the most influential books in the history of psychology, On Death and Dying. It exposed the heartless treatment of terminally-ill patients prevalent at the time. On the positive side, it altered the care and treatment of dying people. On the negative side, it postulated the now-infamous five stages of dying—Denial, Anger, Bargaining, Depression, and Acceptance (DABDA), so annealed in culture that most people can recite them by heart. The stages allegedly represent what a dying person might experience upon learning he or she had a terminal illness. “Might” is the operative word, because Kübler-Ross repeatedly stipulated that a dying person might not go through all five stages, nor would they necessarily go through them in sequence. It would be reasonable to ask: if these conditions are this arbitrary, can they truly be called stages?
Many people have contested the validity of the stages of dying, but here we are more concerned with the supposed stages of grief which derived from the stages of dying.
During the 1970s, the DABDA model of stages of dying morphed into stages of grief, mostly because of their prominence in college-level sociology and psychology courses. The fact that Kübler-Ross’ theory of stages was specific to dying became obscured.
Prior to publication of her famous book, Kübler-Ross hypothesized the Five Stages of Receiving Catastrophic News, but in the text she renamed them the Five Stages of Dying or Five Stages of Death. That led to the later, improper shift to stages of grief. Had she stuck with the phrase catastrophic news, perhaps the mythology of stages wouldn’t have emerged and grievers wouldn’t be encouraged to try to fit their emotions into non-existent stages.
I wholeheartedly concur with the authors that it is not good to confuse the stages a dying person may go through on receiving catastrophic news of terminal illness with the grief stages that may follow once one has learned of a loss and is coping with it (death of someone, divorce of parents, etc.). In the first case the event of concern lies in the future, and would lead to different tactics than in the latter case, where the event has already occurred. Thus, as rightly pointed out by the authors, denial may make sense for dying people ('the diagnosis is incorrect; I am not going to die; I have no serious disease'); denial may not make sense for the loss of a loved one by death, as the event has already happened, and only a very disturbed person, unable to cope, would deny the factuality of the event (death). But this is a lame point; in grief (equated with the loss of a loved one), the first stage can be rightly characterized as disbelief/dissociation/isolation, whereby one actively avoids all thoughts of the loved one's non-existence and has feelings like 'I still cannot believe that my mother is no longer alive'. Similarly, my personal view is that while anger and an energetic search for alternatives may be the second-stage response to a catastrophic prospective forecast, the second-stage response to catastrophic news (news of the loss of a loved one) would be characterized more by energized yearning for the lost one and anger towards the unavoidable circumstances, and the world in general, that led to the loss.
The third stage is particularly problematic. For dying people it makes perfect sense to negotiate and bargain, as the event has not really happened ('I'll stop sinning; take away the cancer'); but, as rightly pointed out by the authors, it doesn't make sense for events that have already happened. While many authoritative people have substituted yearning for the third stage in the case of grief, I would propose that we replace it with regret or guilt. I know this would be controversial, but the idea is a bargaining over past events, like 'God, why didn't you take my life instead of my young son's?'. It doesn't make sense, but it is a normal stage of grieving: looking for and desiring alternative bad outcomes ('I wish I were dead instead of him'). The other two stages, depression and acceptance, do not pose as many problems, so I'll leave them for now. Suffice it to say that becoming depressed/disorganized and then recovering/becoming reorganized are normal stages that one would be expected to go through.
What I would now return to is their criticism of Kubler-Ross. They first attack her by saying her evidence was anecdotal and based on personal feelings; then, instead of correcting this gross error and themselves providing statistical and methodological research results, they present anecdotal evidence based on their helping thousands of grieving persons.
Second, they claim that these stage-based theories cause much harm; but I am not able to understand why a stage-based theory must cause harm, and, for all their good intentions, I think they are seriously confused here. On the one hand they claim (for example, in the depression section) that stages lead to complacency:
It is normal for grievers to experience a lowered level of emotional and physical energy, which is neither clinical depression nor a stage. But when people believe depression is a stage that defines their sad feelings, they become trapped by the belief that after the passage of some time the stage will magically end. While waiting for the depression to lift, they take no actions that might help them.
and on the other hand they claim that labeling something causes over-reactivity and over-treatment:
When medical or psychological professionals hear grievers diagnose themselves as depressed, they often reflexively confirm that diagnosis and prescribe treatment with psychotropic drugs. The pharmaceutical companies which manufacture those drugs have a vested interest in sustaining the idea that grief-related depression is clinical, so their marketing supports the continuation of that belief. The question of drug treatment for grief was addressed in the National Comorbidity Survey, published in the Archives of General Psychiatry (Vol. 64, April 2007). “Criteria For Depression Are Too Broad Researchers Say—Guidelines May Encompass Many Who Are Just Sad.” That headline trumpeted the survey’s results, which observed more than 8,000 subjects and revealed that as many as 25% of grieving people diagnosed as depressed and placed on antidepressant drugs are not clinically depressed. The study indicated they would benefit far more from supportive therapies that could keep them from developing full-blown depression.
Now, I am not clear what the problem is: is it complacency, or too much concern and over-treatment? And this argument they keep repeating and hammering home: that stages do harm, as they make people complacent that things will get better on their own and no treatment is needed. I don't think that is a valid assumption. We all know that many things, like language, develop on their own, but there are critical times when interventions are necessary for proper language to develop; so too with grieving people: they would eventually recover, but they do need the support of friends and family, and all interventions, despite this being 'just a phase'. I don't think saying that something will statistically go away within a certain time period eases the effects one is feeling right now. An analogy may help. It is statistically true that, on average, within six months a person will get over his most recent breakup and perhaps start flirting again; that doesn't subtract from the hopelessness and feelings of futility he feels in the days just following the breakup, and most friends and family do provide support even though they know the phase will pass. The same is true for the stages of grief, and the concerns of the authors are ill-founded.
The one concern of the authors that I did feel sympathetic to, though, was the stage concept being overused in therapy, with feelings like guilt being inadvertently implanted in clients by their therapists.
Grieving parents who have had a troubled child commit suicide after years of therapy and drug and alcohol rehab, are often told, “You shouldn’t feel guilty, you did everything possible.” The problem is that they weren’t feeling guilty, they were probably feeling devastated and overwhelmed, among other feelings. Planting the word guilt on them, like planting any of the stage words, induces them to feel what others suggest. Tragically, those ideas keep them stuck and limit their access to more helpful ideas about dealing with their broken hearts.
Therapists have to be really careful here and not be guided by pre-existing notions of how the patient is feeling. They should listen to the client and, when in doubt, ask questions, not implicitly suggest and assume things. That indeed is a real danger.
Lastly, the criticism of stages/common traits versus individual differences and uniqueness has to be dealt with. The claim that each person grieves uniquely is not a novel claim, nor do I find it lacking in evidence; it is tautological. But still, some common patterns can be elucidated and subsumed under stages. These stages are the 'normal' stages, with enough room for individual aberration. I think there has to be more tolerance and acceptance of the 'abnormal' in general: someone who directly accepts and never feels any denial is abnormal too, but one we readily accept as a resilient person; the other, who gets stuck at denial, has to be shown greater care and hand-held through the remaining stages to come to acceptance.
In the end I would like to briefly touch on the Yale study that reignited this controversy. Here is the summary of An Empirical Examination of the Stage Theory of Grief by Paul K. Maciejewski, PhD; Baohui Zhang, MS; Susan D. Block, MD; Holly G. Prigerson, PhD.
Context The stage theory of grief remains a widely accepted model of bereavement adjustment still taught in medical schools, espoused by physicians, and applied in diverse contexts. Nevertheless, the stage theory of grief has previously not been tested empirically.
Objective To examine the relative magnitudes and patterns of change over time postloss of 5 grief indicators for consistency with the stage theory of grief.
Design, Setting, and Participants Longitudinal cohort study (Yale Bereavement Study) of 233 bereaved individuals living in Connecticut, with data collected between January 2000 and January 2003.
Main Outcome Measures Five rater-administered items assessing disbelief, yearning, anger, depression, and acceptance of the death from 1 to 24 months postloss.
Results Counter to stage theory, disbelief was not the initial, dominant grief indicator. Acceptance was the most frequently endorsed item and yearning was the dominant negative grief indicator from 1 to 24 months postloss. In models that take into account the rise and fall of psychological responses, once rescaled, disbelief decreased from an initial high at 1 month postloss, yearning peaked at 4 months postloss, anger peaked at 5 months postloss, and depression peaked at 6 months postloss. Acceptance increased throughout the study observation period. The 5 grief indicators achieved their respective maximum values in the sequence (disbelief, yearning, anger, depression, and acceptance) predicted by the stage theory of grief.
Conclusions Identification of the normal stages of grief following a death from natural causes enhances understanding of how the average person cognitively and emotionally processes the loss of a family member. Given that the negative grief indicators all peak within approximately 6 months postloss, those who score high on these indicators beyond 6 months postloss might benefit from further evaluation.
I believe they have been very honest with their data and analysis. They found peaks of disbelief, yearning, anger, depression, and acceptance, in that order. I believe they could have clubbed anger and yearning together as the second stage, as this study dealt with stages of grief and not stages of dying, and should have introduced a new measure of regret/guilt; I predict that this new factor's peak would fall between the anger/yearning peak and the depression peak.
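The peak sequence the Yale group reported can be captured in a toy sketch (my own illustration, not the authors' analysis): sorting the indicators by their reported peak times recovers the order predicted by stage theory.

```python
# Toy sketch (my own illustration, not the study's analysis): the rescaled
# grief indicators' reported peak times, checked against the sequence that
# stage theory predicts.
peak_month = {
    "disbelief": 1,    # initial high at 1 month post-loss
    "yearning": 4,     # peaked at 4 months
    "anger": 5,        # peaked at 5 months
    "depression": 6,   # peaked at 6 months
    "acceptance": 24,  # rose throughout; treat the end of the 24-month window as its peak
}

predicted_order = ["disbelief", "yearning", "anger", "depression", "acceptance"]
observed_order = sorted(peak_month, key=peak_month.get)
print(observed_order == predicted_order)  # → True
```

Treating acceptance's "peak" as the end of the observation window is my own simplification; the study only reports that it increased throughout.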
Thus, to summarize, my own theories of grief and dying (in the eight basic adaptive problems framework) are:
Stage theory of dying (same as Kubler-Ross):
Denial: avoiding the predator; as the predator (death) cannot be avoided, it is denied!!
Anger/Searching: searching for resources; an energetic (and thus partly angry) effort to find a solution to this looming death; belief in pseudo-remedies, etc.
Bargaining/negotiating: forming alliances and friendships: making a pact with the devil… or with God… that 'just spare me this time and I will do whatever you want in future'.
Depression: parental investment/ bearing kids analogy: is it worth living/ bringing more people into this world?
Acceptance: helping kin analogy: The humanity is myself. even if I die, I live via others.
Stage theory of grief (any loss especially loss of a loved one)
Disbelief: avoiding the predator (loss). I can't believe the loss happened; let me not think about it.
Anger/Yearning: energetic search for resources (reasons). Why did it happen to me? Can the memories and yearning substitute for the loved one?
Bargaining/regret/guilt: forming alliances and friendships. Could this catastrophe be exchanged for another? Could I have died instead of him?
Depression: parental investment/ bearing kids analogy : is it worth living/ bringing more people into this world?
Acceptance: helping kin analogy: Maybe I can substitute the lost one with other significant others? Maybe I should be thankful that other significant persons are still there and only one loss has occurred.
Do let me know your thoughts on this issue. I, obviously, being a researcher in the stages paradigm, was infuriated on seeing the Shermer article; others may have more balanced views. Do let me know via comments or email!!
Paul K. Maciejewski, PhD; Baohui Zhang, MS; Susan D. Block, MD; Holly G. Prigerson, PhD (2007). An Empirical Examination of the Stage Theory of Grief JAMA, 297 (7), 716-723
The Institute of Psychiatry, London, conducts the Maudsley debates on relevant psychiatric topics between distinguished psychiatrists and neuroscientists, and also publishes them as a podcast. The most recent such debate addressed the issue of whether anti-depressants are any better than placebos in treating depression. There were knowledgeable arguments on both sides, and no matter what position you hold, hearing the debate will definitely enhance your knowledge of the issues involved.
I, for one, did not know that anti-depressants work by addressing automatic and unconscious attention/perception and memory biases. While I was aware that CBT works top-down, affecting cognitive biases and brain regions different from the areas affected by anti-depressants (which presumably work on neurotransmitter levels, bottom-up), the revelation that Goodwin's team had found that anti-depressants too work on biases, but unconscious ones, while CBT works on conscious ones, was new and enriching.
On the other hand, I agree with many of the methodological issues raised by the speakers who claimed that anti-depressants are no better than placebos: that the results lack 'clinical significance'; that, being psycho-active, the drugs are bound to have some effects, and the relief may be symptomatic, due to the 'drug' nature of anti-depressants, rather than specific and addressing the underlying disease; and that the scale measuring depression (the HRSD) may not reflect the DSM criteria and may not be the best measure of disease severity. I concur, but still think that the current generation of anti-depressants must do some good (over and above the good they bring by way of the placebo effect), especially since research has shown how they work (with a lag of a few weeks before showing effects, primarily by inducing neurogenesis and affecting discrete brain areas) and how they are indeed effective, at least in severely depressed people. Still, all this should be taken with a pinch of salt: we have continuously been replacing outdated models of depression (like serotonin deficiency) with more accurate ones (like neurogenesis). In my view we need to persist in that direction, while also maintaining a healthy skepticism of what the drug companies might say in marketing new drugs and models. Fortunately, there are a host of unbiased pharmacologists, neuroscientists and psychiatrists out there who are struggling to find the most accurate model and the most accurate medication/treatment (like CBT); so we need not despair.
However, given the clear evidence that these drugs have not been proved effective beyond doubt, that negative findings have not been reported diligently, and that side-effects are often glossed over, I would request you not to blindly accept all drugs (and models) marketed by Big Pharma at face value, nor to be overly seduced by the anti-depressant efficacy hype, but to moderate that with other known efficacious means like exercise, CBT and yoga (all of which may be working by placebo effect themselves, but which definitely have fewer or no side-effects compared to anti-depressants). This of course doesn't mean that you give up your medicines, at least not without consulting your psychiatrist, but that you supplement them with other non-drug measures and reduce your reliance on the drugs, as they definitely have side-effects and may not be as efficacious as depicted in advertisements and the popular press.
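To make the 'clinical significance' point concrete, here is a small sketch (my own, not from the debate or from Kirsch's paper) of the NICE criterion the debate turned on: a drug-placebo difference of at least 3 points of improvement on the HRSD. The improvement scores in the example are hypothetical, chosen to mirror the roughly 1.8-point average difference the meta-analysis reported.

```python
# Illustrative sketch (my own): NICE's clinical-significance criterion is a
# drug-placebo difference of at least 3 points of HRSD improvement.
NICE_THRESHOLD = 3.0

def clinically_significant(drug_improvement, placebo_improvement,
                           threshold=NICE_THRESHOLD):
    """Return the HRSD drug-placebo difference and whether it crosses the bar."""
    diff = round(drug_improvement - placebo_improvement, 2)
    return diff, diff >= threshold

# Hypothetical improvement scores mirroring the reported ~1.8-point average gap:
print(clinically_significant(10.0, 8.2))  # → (1.8, False)
```

A difference can thus be statistically real (the drug group does improve more) while still falling short of the clinical-significance bar, which is exactly the distinction Kirsch pressed.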
Here is the summary of the talk from the IoP website:
Inspired by the recent media frenzy at Prof Irving Kirsch's research, which suggested that antidepressants are no better than placebo, this Maudsley debate had an extremely good turnout.
Professor Kirsch gave us a run-through of his research, in which he claimed to have found that there was a statistically significant benefit in the use of SSRIs over placebo – but that the difference was smaller than the standard of 'clinical significance' set down by the UK's National Institute for Clinical Excellence (NICE) for all but the most depressed patients. His team also found that patients' response to placebo across all the trials was 'exceptionally large' – an indication of the complexity of the disorder. It was only the fact that the most severely depressed patients showed a much lower response to placebo that made the drug response clinically significant in this group of patients.
Against the motion, Professor Guy Goodwin argued that there were crucial flaws to the bounds that Kirsch had used to define clinical effectiveness. He pointed out that these criteria fail to contain an accurate description of depression, for example that they fail to mention persistent negative thoughts and other crucial symptoms that would be included in DSM IV.
For the motion, Dr Joanna Moncrieff alluded to the idea that there may be some sort of conspiracy of complacency and wishful thinking within the psychiatric profession as to the effectiveness of anti-depressants.
An impassioned speech against the motion was then given by Prof Lewis Wolpert. This was inspired by his own experiences of depression, which proved a powerful persuader as to the place that anti-depressants have in the treatment of severe depression.
Prior to the debate the audience were asked to vote which side of the argument they favoured. The leaning was overwhelmingly against the motion, perhaps not surprising in a room full of psychiatrists! After the speakers had made their points votes were recounted and a minority had changed their minds and had been swayed to support the motion. However those against the motion still had the majority.
The original article that sparked this debate is available online at PLOS Medicine, and I’m including the editor’s summary below:
Everyone feels miserable occasionally. But for some people—those with depression—these sad feelings last for months or years and interfere with daily life. Depression is a serious medical illness caused by imbalances in the brain chemicals that regulate mood. It affects one in six people at some time during their life, making them feel hopeless, worthless, unmotivated, even suicidal. Doctors measure the severity of depression using the “Hamilton Rating Scale of Depression” (HRSD), a 17–21 item questionnaire. The answers to each question are given a score and a total score for the questionnaire of more than 18 indicates severe depression. Mild depression is often treated with psychotherapy or talk therapy (for example, cognitive–behavioral therapy helps people to change negative ways of thinking and behaving). For more severe depression, current treatment is usually a combination of psychotherapy and an antidepressant drug, which is hypothesized to normalize the brain chemicals that affect mood. Antidepressants include “tricyclics,” “monoamine oxidases,” and “selective serotonin reuptake inhibitors” (SSRIs). SSRIs are the newest antidepressants and include fluoxetine, venlafaxine, nefazodone, and paroxetine.
Why Was This Study Done?
Although the US Food and Drug Administration (FDA), the UK National Institute for Health and Clinical Excellence (NICE), and other licensing authorities have approved SSRIs for the treatment of depression, some doubts remain about their clinical efficacy. Before an antidepressant is approved for use in patients, it must undergo clinical trials that compare its ability to improve the HRSD scores of patients with that of a placebo, a dummy tablet that contains no drug. Each individual trial provides some information about the new drug’s effectiveness but additional information can be gained by combining the results of all the trials in a “meta-analysis,” a statistical method for combining the results of many studies. A previously published meta-analysis of the published and unpublished trials on SSRIs submitted to the FDA during licensing has indicated that these drugs have only a marginal clinical benefit. On average, the SSRIs improved the HRSD score of patients by 1.8 points more than the placebo, whereas NICE has defined a significant clinical benefit for antidepressants as a drug–placebo difference in the improvement of the HRSD score of 3 points. However, average improvement scores may obscure beneficial effects between different groups of patient, so in the meta-analysis in this paper, the researchers investigated whether the baseline severity of depression affects antidepressant efficacy.
What Did the Researchers Do and Find?
The researchers obtained data on all the clinical trials submitted to the FDA for the licensing of fluoxetine, venlafaxine, nefazodone, and paroxetine. They then used meta-analytic techniques to investigate whether the initial severity of depression affected the HRSD improvement scores for the drug and placebo groups in these trials. They confirmed first that the overall effect of these new generation of antidepressants was below the recommended criteria for clinical significance. Then they showed that there was virtually no difference in the improvement scores for drug and placebo in patients with moderate depression and only a small and clinically insignificant difference among patients with very severe depression. The difference in improvement between the antidepressant and placebo reached clinical significance, however, in patients with initial HRSD scores of more than 28—that is, in the most severely depressed patients. Additional analyses indicated that the apparent clinical effectiveness of the antidepressants among these most severely depressed patients reflected a decreased responsiveness to placebo rather than an increased responsiveness to antidepressants.
What Do These Findings Mean?
These findings suggest that, compared with placebo, the new-generation antidepressants do not produce clinically significant improvements in depression in patients who initially have moderate or even very severe depression, but show significant effects only in the most severely depressed patients. The findings also show that the effect for these patients seems to be due to decreased responsiveness to placebo, rather than increased responsiveness to medication. Given these results, the researchers conclude that there is little reason to prescribe new-generation antidepressant medications to any but the most severely depressed patients unless alternative treatments have been ineffective. In addition, the finding that extremely depressed patients are less responsive to placebo than less severely depressed patients but have similar responses to antidepressants is a potentially important insight into how patients with depression respond to antidepressants and placebos that should be investigated further.
Irving Kirsch, Brett J. Deacon, Tania B. Huedo-Medina, Alan Scoboria, Thomas J. Moore, Blair T. Johnson (2008). Initial Severity and Antidepressant Benefits: A Meta-Analysis of Data Submitted to the Food and Drug Administration PLoS Medicine, 5 (2) DOI: 10.1371/journal.pmed.0050045
A recent article in Science Magazine relates Magical thinking to feelings of control. It is an interesting paper and here is the abstract:
We present six experiments that tested whether lacking control increases illusory pattern perception, which we define as the identification of a coherent and meaningful interrelationship among a set of random or unrelated stimuli. Participants who lacked control were more likely to perceive a variety of illusory patterns, including seeing images in noise, forming illusory correlations in stock market information, perceiving conspiracies, and developing superstitions. Additionally, we demonstrated that increased pattern perception has a motivational basis by measuring the need for structure directly and showing that the causal link between lack of control and illusory pattern perception is reduced by affirming the self. Although these many disparate forms of pattern perception are typically discussed as separate phenomena, the current results suggest that there is a common motive underlying them.
To me, it is exciting that Magical thinking and feelings of control are linked together. It is my thesis that Manic episodes and frank psychosis are marked by the presence of Magical thinking to a large and non-adaptive degree. Sometimes severe depression too causes psychosis, and I presume that Magical thinking in that case too may be increased. One framework for understanding depression is the learned helplessness paradigm, whereby mice are exposed to uncontrollable shocks and then do not even try to avoid the shocks, even after the external environment has changed and they could now possibly avoid them by correct behaviour. One explanation for psychosis in severe depression may be that feelings of lack of control rise to such a level that one starts indulging in Magical thinking, creating and seeing patterns that are not there, and thus losing touch with reality.
This raises another question: whether Manic psychosis may itself be due to the same stress and feelings of non-control, this time leading not to depression but to Mania. We all know that bipolarity follows a stress-diathesis model, and maybe whenever stress causes feelings of lack of control, bipolar people have a tendency towards exaggerated magical thinking: when mood is good this may lead to Manic psychosis, while when mood is low the same magical thinking may lead to depressive psychosis. Does anyone know of any literature on bipolar people being more magical thinkers? Does the same mechanism also work for them and endow them with creativity? Another related question would be whether bipolar people have more feelings of being out of control. And what about self-esteem: do those in Mania who develop psychosis also suffer from a lack of self-esteem, with the effect mediated by the role of self-esteem in protecting against magical thinking?
The emerald edition of Encephalon is just out at Neuroscientifically Challenged, and Marc does a good job of bringing to light some of the most interesting and fascinating posts on the brain from the last two weeks. A few that I was immediately drawn to: Greg Downey's critical appraisal of popular-press misconceptions about neuroplasticity, in which he does a pretty good job while simultaneously arousing interest in neuroplasticity in general and Doidge's book in particular; and another good one on the growing recognition that antidepressants can temporarily increase suicide risk, and that anti-psychotics may be a novel treatment for reducing suicide risk, as they help control impulsivity. To me, dopamine is related to impulsivity, and anti-psychotics seem a better bet than anti-depressants when targeting suicide, as most suicide is due to high impulsivity. There are many more gems, so go have a look.