sandygautam


Sandeep Gautam is a psychology and cognitive neuroscience enthusiast, whose basic grounding is in computer science.


Posts by sandygautam

Low Latent Inhibition, high faith in intuition and psychosis/creativity

Well, the cluster goes together. Previous research has found that low LI is related to both psychosis (schizophrenia) and creativity; previous research has also found that psychotic individuals and some types of creative people have more faith in intuition; and this research ties things together by showing that low LI and high faith in intuition are correlated.

The research in question is by Kaufman, and in it he explores dual-process theories of cognition: the popular slow high road of deliberate, conscious reasoning and the fast low road of unconscious processing. I would rather have the high road consist of both cognitive and affective factors, and similarly have the unconscious low road consist of both cognitive and affective factors. Kaufman focuses on the unconscious low road, and his factor analysis reveals three factors: faith in intuition, a metacognition about one’s tendency to use intuition; holistic intuition, the cognitive factor; and affective intuition, the affective factor. With this in mind, let us see what Kaufman’s thesis is:

He first introduces the low road and the high road:

In recent years, dual-process theories of cognition have become increasingly popular in explaining cognitive, personality, and social processes (Evans & Frankish, 2009). Although individual differences in the controlled, deliberate, reflective processes that underlay System 2 are strongly related to psychometric intelligence (Spearman, 1904) and working memory (Conway, Jarrold, Kane, Miyake, & Towse, 2007), few research studies have investigated individual differences in the automatic, associative, nonconscious processes that underlay System 1. Creativity and intelligence researchers might benefit from taking into account dual-process theories of cognition in their models and research, especially when exploring individual differences in nonconscious cognitive processes.

Then he explains LI:

Here I present new data, using a measure of implicit processing called latent inhibition (LI; Lubow, Ingberg-Sachs, Zalstein-Orda, & Gewirtz, 1992). LI reflects the brain’s capacity to screen from current attentional focus stimuli previously tagged as irrelevant (Lubow, 1989). LI is often characterized as a preconscious gating mechanism that automatically inhibits stimuli that have been previously experienced as irrelevant from entering awareness, and those with increased LI show higher levels of this form of inhibition (Peterson, Smith, & Carson, 2002). Variation in LI has been documented across a variety of mammalian species and, at least in other animals, has a known biological basis (Lubow & Gerwirtz, 1995). LI is surely important in people’s everyday lives—if people had to consciously decide at all times what stimuli to ignore, they would quickly become overstimulated.
Indeed, prior research has documented an association between decreased LI and acute-phase schizophrenia (Baruch, Hemsley, & Gray, 1988a, 1988b; Lubow et al., 1992). It is known, however, that schizophrenia is also associated with low executive functioning (Barch, 2005). Recent research has suggested that in highfunctioning individuals (in this case, Harvard students) with high IQs, decreased LI is associated with increased creative achievement (Carson et al., 2003). Therefore, decreased LI may make an individual more likely to perceive and make connections that others do not see and, in combination with high executive functioning, may lead to the highest levels of creative achievement. Indeed, the link between low LI and creativity is part of Eysenck’s (1995) model of creative potential, and Martindale (1999) has argued that a major contributor to creative thought is cognitive disinhibition.

He then relates this to intuition and presents his thesis:

A concept related to LI is intuition. Jung’s (1923/1971, p. 538) original conception of intuition is “perception via the unconscious.” Two of the most widely used measures of individual differences in the tendency to rely on an intuitive information-processing style are Epstein’s Rational- Experiential Inventory (REI; Pacini & Epstein, 1999) and the Myers-Briggs Type Indicator (MBTI) Intuition/Sensation subscale (Myers, McCaulley, Quenk, & Hammer, 1998). Both of these measures have demonstrated correlations with openness to experience (Keller, Bohner, & Erb, 2000; McCrae, 1994; Pacini & Epstein, 1999), a construct that has in turn shown associations with a reduced LI (Peterson & Carson, 2000; Peterson et al., 2002), as well as with divergent thinking (McCrae, 1987) and creative achievement.

The main hypothesis was that intuitive cognitive style is associated with decreased latent inhibition.

He found support for the hypothesis in his data: it seemed people with low LI scored high on the faith-in-intuition factor. Here is what he discusses:

The results of the current study suggest that faith in intuition, as assessed by the REI and the MBTI Thinking/Feeling subscale, is associated with decreased LI. Furthermore, a factor consisting of abstract, conceptual, holistic thought is not related to LI. Consistent with Pretz and Totz (2007), exploratory factor analysis revealed a distinction between a factor consisting of REI Experiential and MBTI Thinking/Feeling and a factor consisting of MBTI Intuition/Sensation and REI Rational Favorability. This further supports Epstein’s (1994) theory that the experiential system is directly tied to affect. The finding that MBTI Intuition/Sensation and REI Rational Favorability loaded on the same factor supports the idea that the type of intuition that is being measured by these tasks is affect neutral and more related to abstract, conceptual, holistic thought than to the gut feelings that are part of the Faith in Intuition factor.

Here are the broader implications:

The current study adds to a growing literature on the potential benefits of a decreased LI for creative cognition. Hopefully, with further research on the biological basis of LI, as well as its associated behaviors, including interactions with IQ and working memory, we can develop a more nuanced understanding of creative cognition. There is already promising theoretical progress in this direction.

Peterson et al. (2002) and Peterson and Carson (2000) found a significant relationship between low LI and three personality measures relating to an approach-oriented response and sensation-seeking behavior: openness to experience, psychoticism, and extraversion. Peterson et al. found that a combined measure of openness and extraversion (which was referred to as plasticity) provided a more differentiated prediction of decreased LI.

Peterson et al. (2002) argued that individual differences in a tendency toward exploratory behavior and cognition may be related to the activity of the mesolimbic dopamine system and predispose an individual to perceive even preexposed stimuli as interesting and novel, resulting in low LI. Moreover, under stressful or novel conditions, the dopamine system in these individuals will become more activated and the individual will instigate exploratory behavior. Under such conditions, decreased LI could help the individual by allowing him or her more options for reconsideration and thereby more ways to resolve the incongruity. It could also be disadvantageous in that the stressed individual risks becoming overwhelmed with possibilities. Research has shown that the combination of high IQ and reduced LI predicts creative achievement (Carson et al., 2003). Therefore, the individual predisposed to schizophrenia may suffer from an influx of experiential sensations and possess insufficient executive functioning to cope with the influx, whereas the healthy individual low in LI and open to experience (particularly an openness and faith in his or her gut feelings) may be better able to use the information effectively while not becoming overwhelmed or stressed out by the incongruity of the situation. Clearly, further research will need to investigate these ideas, but an understanding of the biological basis of individual differences in different forms of implicit processing and their relationship to openness to experience and intuition will surely increase our understanding of how certain individuals attain the highest levels of creative accomplishment.

To me this is exciting: the triad of creative/psychotic cognitive style, intuition and latent inhibition seems to gel together. The only gripe I have is that the author could also have measured intuition directly by using some insight problems requiring ‘aha’ solutions; maybe that is a project for the future!
Kaufman, S. (2009). Faith in intuition is associated with decreased latent inhibition in a sample of high-achieving adolescents. Psychology of Aesthetics, Creativity, and the Arts, 3(1), 28-34. DOI: 10.1037/a0014822

Science 2.0 : what is and what needs to be

Chris Patil, of Ouroboros, and Vivian Siegel have an interesting and thought-provoking op-ed in DMM on the promise, and the not-so-promising actuality, of science 2.0.

They are right when they doubt whether science 2.0 would attract many more scientists than the currently active science bloggers and the like, and I share their skepticism. However, while they believe that all the tools for online collaboration are already in place, I think we need a more formalized, one-stop system for scientists, where all their sharing, networking and collaborating needs are met. It doesn’t really attract me that much if I have to collaborate using FriendFeed, share using Twitter, learn using Google Reader, disseminate using Blogger, network using academia.org, and so on. I am sure a scientific virtual water-cooler will soon emerge, but till that time I am skeptical of actual practicing scientists using science 2.0 in their day-to-day lives; of course, how the current breed of science bloggers use these tools, and the kind of successful collaborations they can demonstrate, will likely define the way science 2.0 shapes up.

Needless to say, I am excited to be among the early adopters, and while Twitter/FriendFeed have not lived up to their promise, their relatively older sibling, blogging, has managed to land me virtual collaborations in which I am discussing research ideas with people who actually perform experiments (I am, by circumstance, an armchair scientist). For an example, see the comments by Kim on my last post on action selection, which have also led to some offline discussion and a possible future collaboration. For me science 2.0 works perfectly because I am not in the competitive business of being the first to publish a paper or of securing tenure, and thus can put my ‘ideas to the world’ as freely as they come. At the same time, I am more than aware that the apprehensions scientists have about being stolen from are genuine and need more thought and care while designing science 2.0 tools.

I would now like to quote some passages from the op-ed that I liked the most.

Suppose that your unique combination of training and expertise leads you to ask a novel question that you are not currently able to address. You advertise your idea to the world, seeking others who might be able to help. You find that Miranda has an idle machine, built for another purpose, that could be modified just so to help answer your question, if only she had a few samples from an appropriate patient. Hugo, busy with clinical responsibilities, has no time, but has a freezer full of biopsy tissues from such patients. Steve has the time and inclination to modify Miranda’s machine and to write the scripts to drive the analysis. Polly watches the whole process to make sure that the study has sufficient statistical power. Correspondence among the interested parties could be recorded in a publicly available forum, along with data and analysis as they emerge – allowing the entire scientific world to look on and to offer advice on the framing of the question, the design of the machine, the processing of the samples and the interpretation of the results.

In other words, what if you could think a thought at the world and have the world think back? What if everyone in the world were in your lab – a ‘hive mind’ of sorts, but composed of countless creative intellects rather than mindless worker ants, and one in which resources, reagents and effort could be shared, along with ideas, in a manner not dictated by institutional and geographical constraints?

What if, in the process, you could do actual scientific research? Granted, it would be research for which no one person (or group) could take credit, but research all the same. Progress might even occur more rapidly than it does in our world, where new knowledge is shared in the form of highly refined distillates of years of work.

I fit perfectly the description of a person who can ask novel questions and offer experimental suggestions, but lacks the expertise/time/resources/sanction to run them. To me this hive mind would be a godsend. If only it could take off! But then they provide a reality check:

Beyond raising concerns about the philosophy of communication, our utopian fantasy ignores important aspects of human nature. In any real world, finding collaborators would require a great deal more than shooting questions into the void and cocking an ear for the echo. In particular, in order to find a colleague with exactly the right complement of skills, interest and dependability, we need not only openness but trust. Within a laboratory group (at least, in a functional one), trust is part and parcel of lab citizenship; we and our colleagues voluntarily suspend our competitive urges in order to create a cooperative (and mutually beneficial) environment. In the wider world, however, the presumption is reversed: we tend to be cagey and suspicious in our interactions with other scientists. When we step outside the laboratory door, we transform from Musketeers (‘All for one…!’) to Mulder and Scully (‘Trust no one.’).

Oh, how I hate that they have burst my fantasy bubble by providing this reality check! But thankfully, not being bound to any laboratory, I am at least immune from this cooperate-or-compete dilemma. I just hope there are more people like me (or enough foolish scientists not really bothered about plagiarism) to reach a critical mass and snowball science 2.0. They then touch on some subtler aspects of the above:

Another clash between utopia and human nature occurs at the level of publicly sharing preliminary data. In particular, during the period of transition between the status quo and the glorious future, openness may be provably irrational from a game-theoretical standpoint. If I share my data but my competitors do not, I’ve laid all of my cards out on the table, whereas others play theirs close to the vest – a bad bet under any circumstances. At best, my openness allows my adversaries to strategize; at worst, it allows them to steal my ideas. Perhaps the term ‘stealing’ is too harsh: in the words of our estimable thesis advisor, Peter Walter, ‘you can’t unthink a thought.’ Once an idea is in the field, can anyone be blamed for reacting to it in a way that is personally optimal? We already live with this moral conundrum every time we agree to review papers and need to balance the expectation of confidentiality with our own desire to shape our own future plans on the basis of the best and most current information. Radical sharing will require ways for individuals to protect themselves from the occasionally deleterious consequences of rational self-interest.

Perhaps most importantly from a practical perspective: information doesn’t share itself. From establishing an open record of preliminary discussions to freely disseminating experimental results, each step in the process requires an infrastructure. A framework, composed of software and web tools, is necessary in order to empower individual scientists to share information without each of them having to write the enabling code from scratch.

The weakest part of the article, in my opinion, is when they argue that the tools are already available. I believe we are still in the early stages of experimenting; new concepts and sites like BiomedExperts need to be experimented with, and I am sure we will soon be there. The authors suggest several sites where science 2.0 scientists purportedly hang out, and then they point to reasons why that model has not succeeded yet:

Social networking tools also suffer from a variant of the ‘no one will go there until everyone goes there’ problem – the ‘me too’ dilution factor. Just as in the social/job space (Facebook, LinkedIn, MySpace, Bebo), there are myriad networks to choose from and many are too similar to distinguish. To a new user with limited time, it’s not obvious whether to try and join multiple networks, arbitrarily choose one, or wait for a clear winner to emerge.

Here’s praying that a clear victor emerges soon!

Patil, C., & Siegel, V. (2009). This revolution will be digitized: online tools for radical collaboration. Disease Models and Mechanisms, 2(5-6), 201-205. DOI: 10.1242/dmm.003285

Action-selection and Attention-allocation: a common problem and a common solution?

I have recently blogged a bit about action-selection and operant learning, emphasizing that the action one chooses, out of the many possible, is driven by maximizing the utility function associated with the set of possible actions; so perhaps a quick read of the last few posts would help you appreciate where I am coming from.

To recap, whenever an organism decides to indulge in an act (an operant behavior), there are many possible actions from which it has to choose the most appropriate one. Each action leads to a possibly different outcome, and the organism may value those outcomes differentially. This valuation may be objective (how much the organism actually ‘likes’ the outcome once it happens) or subjective (how keenly the organism ‘wants’ the outcome to happen, independent of whether the outcome is pleasurable or not). Also, it is never guaranteed that the action will produce the desired/expected outcome; there is always some probability that the act may or may not result in the expected outcome. Further, on a macro level the organism may lack the energy required to indulge in the act or to carry it out successfully to completion. Mathematically, with each action one can associate a utility U = E x V, where U is the utility of the act, E is the expectancy as to whether one will be able to carry out the act and, if so, whether the act will result in the desired outcome, and V is the value (both subjective and objective) that one has assigned to the outcome. The problem of action-selection is then simply to compute the utility of each of the n possible acts and to choose the action with maximum utility.
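
To make the selection rule concrete, here is a minimal sketch in Python; the actions, expectancies and values are invented for illustration and are not taken from any study:

```python
# Toy illustration of action selection as utility maximization (U = E x V).
# The actions, expectancies and values below are invented for illustration only.

actions = {
    "forage_new_patch": {"expectancy": 0.4, "value": 10.0},   # risky but potentially rewarding
    "forage_known_patch": {"expectancy": 0.9, "value": 4.0},  # safe, modest reward
    "rest": {"expectancy": 1.0, "value": 1.0},                # nearly certain, low value
}

def utility(expectancy, value):
    """U = E x V: expectancy of carrying the act to its outcome, times the outcome's value."""
    return expectancy * value

def select_action(options):
    """Action selection: pick the act with the maximum utility."""
    return max(options, key=lambda a: utility(options[a]["expectancy"], options[a]["value"]))

for name, params in actions.items():
    print(name, utility(params["expectancy"], params["value"]))
print("chosen:", select_action(actions))   # -> "forage_new_patch" with these numbers
```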

Today I had an epiphany: doesn’t the same logic apply to allocating attention to the various stimuli that bombard us? Assuming a spotlight view of attention, and assuming that attentional resources are limited, one is constantly faced with the problem of finding which stimuli in the world are salient and need to be attended to. Now, the leap I am making is that attention-allocation, just like choosing to act volitionally, is an operant process: not reactive but pro-active. It may be unconscious, but it still involves volition and ‘choosing’. Remember that even acts can be reactive, and thus there is room for reactive attention; but what I am proposing is that the majority of attention is pro-active, actively choosing between stimuli and focusing on one to try to better predict the world. We are basically prediction machines that want to predict beforehand the state of the world most relevant to us, and this we do by classical or Pavlovian conditioning. We try to associate stimuli (CS) with other stimuli (UCS) or responses (UCR), and thus try to ascertain what the state of the world at time T would be, given that a stimulus (CS) has occurred. Apart from prediction machines we are also agents that try to maximize rewards and minimize punishments by acting on this knowledge and interacting with the world. There are thousands of actions we could indulge in, but we choose wisely; there are thousands of stimuli in the external world, but we attend to the salient ones wisely.

Let me elaborate on the analogy. While selecting an action we maximize reward and minimize punishment; basically, we choose the action with maximal utility. While choosing which stimuli to attend to, we maximize our foreknowledge of the world and minimize surprises; basically, we choose the stimulus with maximal predictability. We can even write an equivalent mathematical formula: predictability P = E x R, where P is the increase in predictability due to attending to stimulus 1, E is the probability that stimulus 1 correctly leads to a prediction of stimulus 2, and R is the relevance of stimulus 2 (the information) to us. Thus the stimulus one attends to is the one that leads to the maximum gain in predictability. Also, just as the general energy level of the organism biases whether, and how much, the organism acts, there is a general arousal level of the organism that biases whether, and how much, it attends to stimuli.
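
The structural parallel with the utility rule above can be shown with an equally minimal sketch; again, the stimuli and numbers are made up purely for illustration:

```python
# Toy illustration of attention allocation as predictability maximization (P = E x R),
# structurally parallel to U = E x V above. Stimuli and numbers are invented.

stimuli = {
    "rustling_grass": {"predicts_next": 0.7, "relevance": 8.0},  # may predict a predator
    "bird_song": {"predicts_next": 0.9, "relevance": 1.0},       # reliable but irrelevant
    "cloud_shape": {"predicts_next": 0.2, "relevance": 0.5},     # neither reliable nor relevant
}

def predictability_gain(e, r):
    """P = E x R: probability the cue predicts the next event, times that event's relevance."""
    return e * r

def attend_to(candidates):
    """Allocate the attentional spotlight to the stimulus with the maximal predictability gain."""
    return max(candidates,
               key=lambda s: predictability_gain(candidates[s]["predicts_next"],
                                                 candidates[s]["relevance"]))

print(attend_to(stimuli))   # -> "rustling_grass" with these numbers (5.6 vs 0.9 vs 0.1)
```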

So, what new insights do we gain from this formulation? The first insight we may gain is by elaborating the analogy further. We know that the basal ganglia in particular, and dopamine in general, are involved in action-selection. Dopamine is also heavily involved in operant learning. We can predict that dopamine systems, and the same underlying mechanisms, may also be used for attention-allocation, and that dopamine may be heavily involved in classical learning as well. Moreover, the basic computations and circuitry involved in allocating attention should be similar to those involved in action-selection. Both disciplines can learn from each other and use methods developed in one field to understand and elaborate phenomena in the other. For example, we know that dopamine, while coding for reward error/incentive salience, also codes for novelty and is heavily involved in novelty detection. Is that novelty detection driven by the need to avoid surprises, especially while allocating attention to a novel stimulus?

What are some of the predictions we can make from this model? Just as with the abundant literature on U = E x V in the decision-making and action-selection literature, we should be able to show the independent and interacting effects of expectancy and relevance on the attention-grabbing properties of a stimulus. The relevance of different stimuli can be manipulated by pairing them with a UCR/UCS of differing degrees of relevance. Expectancy can be differentially manipulated by the strength of conditioning: more trials would mean that the association between the CS and UCS is stronger. Also, the level of arousal may bias the ability to attend to stimuli. I am sure there is much that attention research can learn from research on decision-making and action-selection, and the reverse would also be true. It may even be that attention-allocation is already conceptualized in these terms; if so, I plead ignorance of this sub-field and would love a few pointers so that I can refine my thinking and framework.

Also consider that there is already some literature implicating dopamine in attention, and the fact that dopamine dysfunction in schizophrenia, ADHD, etc. has cognitive and attentional implications is an indication in itself. Also, the contextual salience of drug-related cues may be a powerful effect of dopamine-based classical conditioning and attention allocation hijacking the normal dopamine pathways in addicted individuals.

Lastly, I was set on this direction while reading an article on the chaining of actions to get desired outcomes, and on how two different brain systems (a cognitive, prefrontal high road based on model-based reinforcement learning, and an unconscious, dorsolateral striatal low road based on model-free reinforcement learning) may be involved in deciding which action to choose. I believe the same conundrum presents itself when one turns to the attention-allocation problem, where stimuli are chained together and predict each other in succession; I would predict that there are two roads involved here too, but that is matter for a future post. For now, I would love some honest feedback on what value, if any, this new conceptualization adds to what we already know about attention allocation.
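
For readers unfamiliar with the model-based/model-free distinction mentioned above, here is a bare-bones sketch of the two ‘roads’; this is not the model from the article, just a toy illustration with invented numbers:

```python
# Toy contrast between the two "roads": a model-based planner that uses an explicit
# transition model, and a model-free learner that caches a value from sampled experience.
# Not the model from the article; numbers and structure are invented for illustration.
import random

P_SUCCESS = 0.8        # probability that the act reaches the rewarding end of the chain
REWARD_AT_END = 5.0    # reward obtained when the chain completes

def model_based_value():
    """High road: plan over an explicit model (probability times reward)."""
    return P_SUCCESS * REWARD_AT_END

def model_free_value(episodes=5000, alpha=0.1):
    """Low road: incrementally cache a value from raw experience, with no explicit model."""
    q = 0.0
    for _ in range(episodes):
        reward = REWARD_AT_END if random.random() < P_SUCCESS else 0.0
        q += alpha * (reward - q)   # simple incremental (TD-like) update
    return q

print(model_based_value())   # exact: 4.0
print(model_free_value())    # converges near 4.0 from experience alone
```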

Low Mood and Risk Aversion: a poor State outcome?

Daniel Nettle writes an article in the Journal of Theoretical Biology about the evolution of low mood states. Before I get to his central thesis, let us review what he reviews:

Low mood describes a temporary emotional and physiological state in humans, typically characterised by fatigue, loss of motivation and interest, anhedonia (loss of pleasure in previously pleasurable activities), pessimism about future actions, locomotor retardation, and other symptoms such as crying.

This paper focuses on a central triad of symptoms which are common across many types of low mood, namely anhedonia, fatigue and pessimism. Theorists have argued that, whereas their opposites facilitate novel and risky behavioural projects, these symptoms function to reduce risk-taking. They do this, proximately, by making the potential payoffs seem insufficiently rewarding (anhedonia), the energy required seem too great (fatigue), or the probability of success seem insufficiently high (pessimism). An evolutionary hypothesis for why low mood has these features, then, is that it is adaptive to avoid risky behaviours when one is in a relatively poor current state, since one would not be able to bear the costs of unsuccessful risky endeavours at such times.

I would like to pause here and note how beautifully he has summed up the symptoms and key features of low mood. Taking the liberty of redefining them using my own framework of Value x Expectancy, and the distinction between the cognitive (‘wanting’) and behavioral (‘liking’) sides of things:

  • Anhedonia: behavioral inability to feel rewarded by previously pleasurable activities. Loss of ‘liking’ following the act. Less behavioral Value assigned.
  • Loss of motivation and interest: cognitive inability to look forward to or value previously desired activities. Loss of ‘wanting’ prior to the act. Less cognitive Value assigned.
  • Fatigue: behavioral inability to feel that one can achieve the desired outcome due to feelings that one does not have sufficient energy to carry the act to success. Less behavioral Expectancy assigned.
  • Pessimism: cognitive inability to look forward to or expect good things about the future or that good outcomes are possible. Less cognitive Expectancy assigned.

The reverse conglomeration is found in high mood: high wanting and liking, high energy and a positive outlook. Thus, I agree with Nettle fully that low mood and high mood are defined by these opposed features, and also that these features are powerful proximate mechanisms that determine the risk proneness of the individual: by subjectively manipulating the Value and Expectancy associated with an outcome, high and low mood mediate the risk proneness an organism displays while assigning a utility to an action. Thus, it is fairly settled: if the ultimate goal is to increase risk-prone behavior, then the organism should use the proximate mechanism of high mood; if the ultimate goal is to avoid risky behavior, then the organism should display low mood, which would proximately help it avoid risk.
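
One toy way to see this proximate mechanism at work is to reuse the U = E x V rule from the earlier post; the mood-scaling scheme and all numbers here are my own invented illustration, not anything from Nettle’s paper:

```python
# Toy sketch: mood as a multiplier on the subjective Expectancy and Value of a risky option.
# A factor below 1 mimics low mood (dampened E and V); above 1 mimics high mood. Invented numbers.

def utility(expectancy, value):
    return expectancy * value

def choice_under_mood(mood_factor):
    risky = utility(0.3 * mood_factor, 5.5 * mood_factor)  # low chance, big payoff, mood-sensitive
    safe = utility(0.9, 2.0)                               # routine option, left untouched by mood
    return ("risky" if risky > safe else "safe"), round(risky, 2), round(safe, 2)

for mood in (0.6, 1.0, 1.4):   # low, neutral, high mood
    print(mood, choice_under_mood(mood))
# low and neutral mood pick the safe option; high mood tips the choice to the risky one
```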

Now let me talk about Nettle’s central thesis. It has previously been proposed in the literature that low mood (and thus risk-aversion) is due to being in a poor state, wherein one can avoid energy expenditure (and thus a worsening of the situation) by assuming a low profile. Nettle plays the devil’s advocate and argues that an exactly opposite argument can be made: the organism in a poor state needs to indulge in high-risk (and high-energy) activities to get out of the poor state. Thus, there is no a priori reason why one explanation should be more sound than the other. To find out when exactly high-risk behavior pays off and when exactly low-risk behaviors are more optimal, he develops a model and uses some elementary mathematics to derive conclusions. He bases his model, of course, on a preventive focus, whereby the organism tries to avoid falling into a sub-threshold state below R. He allows S(t) to be maximized under the constraint that one does not lose sight of R. I’ll not go into the mathematics, but the results are simple. When there is a lot of difference between R (the dreaded state) and S (the current state), the organism adopts a risky behavioral profile; when R and S are close, it maintains low-risk behavior; however, when it is in dire circumstances (R and S are very close), risk proneness again rises to dramatic levels. To quote:

The model predicts that individuals in a good state will be prepared to take relatively large risks, but as their state deteriorates, the maximum riskiness of behaviour that they will choose declines until they become highly risk-averse. However, when their state becomes dire, there is a predicted abrupt shift towards being totally risk-prone. The switch to risk-proneness at the dire end of the state continuum is akin to that found near the point of starvation in the original optimal foraging model from which the current one is derived (Stephens, 1981). The graded shift towards greater preferred risk with improving state is novel to this model, and stems from the stipulation that if the probability of falling into the danger zone in the next time step is minimal, then the potential gain in S at the next time step should be maximised. However, a somewhat similar pattern of risk proneness in a very poor state, risk aversion in an intermediate state, and some risk proneness in a better state, is seen in an optimal-foraging model where the organism has not just to avoid the threshold of starvation, but also to try to attain the threshold of reproduction (McNamara et al., 1991). Thus, the qualitative pattern of results may emerge quite generally from models using different assumptions.
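
The qualitative pattern in that passage can be reproduced with a very simple heuristic; to be clear, this is not Nettle’s actual model, just a toy sketch under invented assumptions (normally distributed gains per step and a fixed tolerance for the risk of falling below the threshold):

```python
# Toy reproduction of the qualitative pattern described above (not Nettle's actual model):
# pick the riskiest behaviour whose chance of falling below the danger threshold R stays
# acceptable; if no behaviour is safe enough (dire state), gamble on the riskiest one.
# All numbers, options and the normal-noise assumption are invented for illustration.
from statistics import NormalDist

R = 0.0   # danger threshold the organism must stay above

# behaviours: (name, mean gain per step, standard deviation of gain); riskier = higher both
behaviours = [("hide", 0.1, 0.2), ("forage_locally", 0.5, 1.0), ("explore_far", 1.5, 3.0)]
MAX_RISK_OF_RUIN = 0.05   # tolerated probability of dropping below R in one step

def p_ruin(state, mean, sd):
    """Probability the next step lands below R, assuming normally distributed gains."""
    return NormalDist(mean, sd).cdf(R - state)

def preferred_behaviour(state):
    safe_enough = [b for b in behaviours if p_ruin(state, b[1], b[2]) <= MAX_RISK_OF_RUIN]
    if not safe_enough:                                     # dire straits: nothing is safe anyway
        return max(behaviours, key=lambda b: b[2])[0]       # gamble on the riskiest option
    return max(safe_enough, key=lambda b: b[2])[0]          # riskiest acceptable option

for s in (0.05, 0.5, 2.0, 6.0):   # dire, poor, intermediate, good current state
    print(s, preferred_behaviour(s))
```

With these invented numbers the sketch picks the riskiest behaviour when the state is dire, the safest one when the state is just above the threshold, a moderately risky option as the state improves, and the riskiest again when the state is very good, echoing the qualitative pattern in the quote.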

Nettle then extrapolates the clinical significance of this by proposing that ‘agitated’/‘excited’ depression can be explained as the case in which the organism is in dire straits and has thus become risk-prone. He also uses similar logic for dysphoric mania, although I don’t buy that. However, I agree that euphoric mania may just be the other extreme: high mood, more risk proneness and goal achievement; while depression is the normal extreme of low mood, adverse circumstances and risk aversion. To me this model ties together certain things we know about life circumstances, risk profiles and the mood tone of people, and it deepens our understanding.
Nettle, D. (2009). An evolutionary model of low mood states. Journal of Theoretical Biology, 257(1), 100-103. DOI: 10.1016/j.jtbi.2008.10.033

The bipolar phenotype: Excessive self-regulatory focus?

In my last post I hinted that bipolar mania and depression may both be characterized by an excessive and overactive self-regulatory focus, with promotion focus being related to mania and prevention focus being related to depression. It is important to pause and note that the bipolar propensity is towards more self-referential, goal-directed activity, resulting in excessive use of self-regulatory focus. To clarify, I am sticking my neck out and claiming that depression is marked by an excessive obsession with self-oriented, goal-directed activities, but with a preventive focus, thus concentrating more on the self’s responsibilities, duties, obligations, etc. with respect to near and dear ones. Mania, on the other hand, also involves an excessive self-oriented, goal-directed focus, but the focus is promotional, with an obsession with hopes, aspirations, etc., which are relatively more inward-focused and not too dependent on significant others.

Thus, I characterize depression as a state where the regulatory reference is negative (one is focused on avoiding ending up in a negative end-state, like being a burden on others), the regulatory anticipation is negative (one anticipates pain as a result of almost any act one may perform and thus dreads day-to-day activity) and the regulatory focus is negative (a preventive focus whereby one is more concerned with duties and obligations to perform, and security is a paramount need). The entire depressive syndrome can be summed up as an overactivity of avoidance-based mechanisms. However, please note that there is still an excess of self-referential/self-focused thinking, and one is greatly motivated (although perhaps lacking the energy) to bridge the difference between the real self and the ‘ought’ self. One can say that one’s whole life revolves around trying to become the ‘ought’ self, or rather that one conceptualizes oneself in terms of the ‘ought’ self.

Contrast this with mania, where the regulatory reference is positive (one is focused on achieving something grandiose), the regulatory anticipation is positive (one feels in control and believes that only good things can happen to the self) and the regulatory focus is positive (a promotional focus whereby one is more concerned with hopes, aspirations, etc. and growth/actualization needs). Still, just as in depression, there is an excess of focus on the self, and one is greatly motivated (and also has the energy) to bridge the difference between the real and the ‘ideal’ self. One can say that one’s whole life revolves around trying to become the ‘ideal’ self, or rather that one conceptualizes oneself in terms of an ‘ideal’ self.

What can we predict from the above? We know that the brain’s default network is involved in self-focused thoughts and ruminations. We can predict, and know for a fact, that the default network is overactive in schizophrenics (and thus, by extension, in bipolars, who I believe have the same underlying pathology, at least as far as the psychotic spectrum is concerned), and thus we can say with confidence that regulatory focus should indeed be high in bipolars and should be correlated with default-network activity. We can also predict that during the manic phase the promotion-focus-related neural network should be more active, and that in the depressive phase the prevention-related areas of the brain should be more active. This last hypothesis still needs experimentation, but let’s backtrack a bit and first look at the neural correlates of promotion and prevention regulatory self-focus.

For this, I refer readers to a study, important in my view, that tried to dissociate medial PFC and PCC activity (both of which belong to the default network) while people engaged in self-reflection. Here is the abstract of the study:

Motivationally significant agendas guide perception, thought and behaviour, helping one to define a ‘self’ and to regulate interactions with the environment. To investigate neural correlates of thinking about such agendas, we asked participants to think about their hopes and aspirations (promotion focus) or their duties and obligations (prevention focus) during functional magnetic resonance imaging and compared these self-reflection conditions with a distraction condition in which participants thought about non-self-relevant items. Self-reflection resulted in greater activity than distraction in dorsomedial frontal/anterior cingulate cortex and posterior cingulate cortex/precuneus, consistent with previous findings of activity in these areas during self-relevant thought. For additional medial areas, we report new evidence of a double dissociation of function between medial prefrontal/anterior cingulate cortex, which showed relatively greater activity to thinking about hopes and aspirations, and posterior cingulate cortex/precuneus, which showed relatively greater activity to thinking about duties and obligations. One possibility is that activity in medial prefrontal cortex is associated with instrumental or agentic self-reflection, whereas posterior medial cortex is associated with experiential self-reflection. Another, not necessarily mutually exclusive, possibility is that medial prefrontal cortex is associated with a more inward-directed focus, while posterior cingulate is associated with a more outward-directed, social or contextual focus.

The authors then touch upon something similar to what I have said above: one can be overly planful or goal-directed (the bipolar propensity), but it would still make sense to ask whether the focus is promotional or preventive. To quote:

The idea of variation in individuals’ regulatory focus highlights the difference between agendas and traits; two people could both be described by the trait ‘planful’, but planful about what? A person with a predominantly promotion focus would be more likely to be planful about attaining positive rewards or outcomes, while a person with a predominantly prevention focus would be more likely to be planful about avoiding negative events or outcomes. Although a promotion or prevention focus may dominate, the aspects of the self that are active change dynamically across situations (e.g. Markus and Wurf, 1987), thus most individuals have both promotion and prevention agendas. For example, the same person can hold both the hope of becoming rich (a promotion agenda) and the duty to support an aging parent (a prevention agenda), or the aspiration to be a good citizen and the obligation to be a well-informed voter. As individuals, hopes and aspirations and duties and obligations make up a large part of our mental life and constitute the motivational scaffolding for much of our behaviour.

Now comes the study design:

The present studies investigated neural activity when participants were asked to think about self-relevant agendas related to either a promotion (think about your hopes and aspirations) or prevention (think about your duties and obligations) focus. We compared neural activity associated with thinking about these two different types of self-relevant agendas and with thinking about non-self-relevant topics (distraction). We expected greater activity in anterior and/or posterior medial regions associated with these two self-reflection conditions compared with the distraction control condition because thinking about one’s agendas, like thinking about one’s traits, is self-referential. Such a finding would also be consistent, for example, with Luu and Tucker’s (2004) proposal that both anterior cingulate and posterior cingulate cortex contribute to action regulation by representing goals and expectancies.

And this is what they found:

A double dissociation was found when participants were cued to think about promotion and prevention agendas on different trials for the first time during scanning (Experiment 2) and when they spent several minutes thinking about either promotion or prevention agendas before scanning (Experiment 1), indicating that it results from what participants are thinking about during the scan and not from some general effect (e.g. mood) carried over from the pre-scan period of self-reflection.

Here is what they discuss:

In short, the double dissociation between medial PFC and anterior/inferior medial posterior areas and our two self-reflection conditions indicates that these brain areas serve somewhat different functions during self-focus. There are a number of interesting possibilities that remain to be sorted out. Differential activity in these anterior medial and posterior medial regions as a function of the types of agendas participants were asked to think about could reflect: (i) differences in the representational content in the specific features of agendas, schemas, possible selves and so forth that constitute hopes and aspirations on the one hand and duties and obligations on the other (cf. Luu and Tucker, 2004); (ii) differences in the type(s) of component processes these agendas are likely to engage and/or the representational content they are likely to activate, for example, discovering new possibilities (hopes) vs retrieving episodic memories (e.g. Maddock et al., 2001) of past commitments (duties); (iii) differences in affective significance of hopes and aspirations (attaining the positive) and duties and obligations (avoiding the negative, Higgins, 1997; 1998); (iv) different aspects of the subjective experience of self, such as the subjective experience of control (an instrumental self) vs the subjective experience of awareness (an experiential self; Johnson, 1991; Johnson and Reeder, 1997; compare, e.g. Searle, 1992 and Weiskrantz, 1997, vs Shallice, 1978 and Umilta, 1988); (v) differences in the social significance of hopes and aspirations (more individual) and duties and obligations (involving others). This last possibility is suggested by findings linking the posterior cingulate with taking the perspective of another (Jackson et al., 2006). It may be that thinking about duties and obligations (a more outward focus) tends to involve more perspective-taking than does thinking about hopes and aspirations (a more inward focus). The greater number of mental/emotional references from the promotion group on the pre-scan essay and the tendency for a greater number of references to others from the prevention group are consistent with the hypothesis that medial PFC activity is associated with a more inward focus whereas posterior cingulate/precuneus activity is associated with a more outward, social focus. Clarifying the basis of the similarities and differences between neural activation associated with thinking about hopes and aspirations vs duties and obligations would begin to help differentiate the relative roles of brain regions in different types of self-reflective processing.

They do discuss the clinical significance of their studies, but not in the terms I would have loved. I would like to see whether there is state/trait hyperactivity, and a dissociation between mPFC and PCC activation, when the variable of depressive-episode versus manic-episode subjects is introduced. I’ll place my bets on there being an interaction between the type of episode and overactivity in the corresponding default-network regions, but I would like to see that data collected.

So my thesis is that the self-reflective, self-focused default network is overactive in bipolar/psychotic spectrum people, but a bias or tilt towards a promotion or prevention focus leads to their recurring, periodic episodes of mania and depression.

Lastly, let me touch upon affect in these states and what Higgins had to say about this in his paper covered yesterday. Higgins proposed that bipolar disorder is due to a promotional focus, with mania induced when there is not much mismatch (or awareness of mismatch) between the ideal and real self, and depression, or sadness and melancholia, induced when one becomes aware of the discrepancy between the ideal and the real self. He proposes that a discrepancy between the ‘ought’ and real self leads to anxiety and nervousness/agitation, while a preventive focus and congruency between the ‘ought’ and real self leads to calmness/quiescence.

I disagree with his formulation inasmuch as I differentiate between a regulatory focus and the corresponding awareness of discrepancies in that direction. To Higgins they are the same: if someone has a promotional focus, he will also be more aware of the discrepancies between his ideal and real self and thus be saddened. I disagree. I believe that if one has a promotional focus, one is driven by goals to make the real self as close to the ideal self as possible, and if one is not able to do so, one will use defense mechanisms to delude oneself and will not admit the reality, as the reality of incongruence along the focused dimension is too painful. However, because one is consciously focused on promotion, one will be aware of trade-offs and will acknowledge to oneself that one’s ‘ought’ self, which anyway is not too important for one’s self-concept, is not congruent with the real self. Thus, someone with a predominant promotion focus may be painfully aware of the discrepancy between his ‘ought’ and real self and thus might be nervous, agitated and irritable: all symptoms of mania.

A depressive person, on the other hand, has a predominant preventive focus, and all actions/ruminations are driven by responsibilities and obligations. Here, acknowledging to oneself that one has failed in meeting obligations may be catastrophic, so one will try to delude oneself that one is closer to the ‘ought’ self than is the case. However, one may not require any defense mechanisms when judging the discrepancy between the ‘ideal’ and real self, as that ‘ideal’ self is no longer a matter of life and death. One will be aware that one is not focusing much on hopes and aspirations and thus feel despondent, sad, melancholic: again, classic symptoms of depression. Yet, despite the affect of sadness, all rumination will be focused on the ‘ought’ self, and thus its content will be of guilt, duties, burden, responsibilities, etc.

I’m sure there is some grain of truth in my formulation, but I won’t be able to state it emphatically unless the above-proposed dissociation study involving the default-network regions and bipolar people is done. If one of you decides to do that, do let me know the results, even if they contradict the thesis.

Johnson, M. (2006). Dissociating medial frontal and posterior cingulate activity during self-reflection. Social Cognitive and Affective Neuroscience, 1(1), 56-64. DOI: 10.1093/scan/nsl004
Higgins, E. T. (1997). Beyond pleasure and pain. American Psychologist, 52, 1280-1300.
