Chris Patil, of Ouroboros, and Vivian Siegel have an interesting and thought-provoking op-ed in DMM on the issue of the promise, and the not-so-promising actuality, of science 2.0.
They are right to doubt whether science 2.0 would attract many more scientists than the currently active science bloggers and their like, and I share their skepticism. However, while they believe that all the tools for online collaboration are already in place, I think we need a more formalized one-stop system for scientists, where all their sharing, networking and collaborating needs are met. It doesn't really attract me that much if I have to collaborate using FriendFeed, share using Twitter, learn using Google Reader, disseminate using Blogger, or network using academia.org, etc. I am sure a scientific virtual water-cooler will soon emerge, but till that time I am skeptical of actual practicing scientists using science 2.0 in their day-to-day lives; of course, how the current breed of science bloggers use these tools, and the kind of successful collaborations they can demonstrate, will likely define the way science 2.0 shapes up.

Needless to say, I am excited to be among the early adopters, and while Twitter/FriendFeed have not lived up to their promise, their relatively older sibling, blogging, has managed to land me virtual collaborations in which I am discussing research ideas with people who actually perform experiments (I am, by circumstance, an armchair scientist). For an example, see the comments by Kim on my last post on action selection, which have also led to some offline discussion and a possible future collaboration. For me science 2.0 works perfectly because I am not in the competitive business of being the first to publish a paper or to secure tenure, and thus I can put my 'ideas to the world' as freely as they come. At the same time, I am more than aware that the apprehensions scientists have about being stolen from are genuine and need more thought and care while designing science 2.0 tools.
I would now like to quote some of the passages from the op-ed that I liked the most.
Suppose that your unique combination of training and expertise leads you to ask a novel question that you are not currently able to address. You advertise your idea to the world, seeking others who might be able to help. You find that Miranda has an idle machine, built for another purpose, that could be modified just so to help answer your question, if only she had a few samples from an appropriate patient. Hugo, busy with clinical responsibilities, has no time, but has a freezer full of biopsy tissues from such patients. Steve has the time and inclination to modify Miranda’s machine and to write the scripts to drive the analysis. Polly watches the whole process to make sure that the study has sufficient statistical power. Correspondence among the interested parties could be recorded in a publicly available forum, along with data and analysis as they emerge – allowing the entire scientific world to look on and to offer advice on the framing of the question, the design of the machine, the processing of the samples and the interpretation of the results.
In other words, what if you could think a thought at the world and have the world think back? What if everyone in the world were in your lab – a ‘hive mind’ of sorts, but composed of countless creative intellects rather than mindless worker ants, and one in which resources, reagents and effort could be shared, along with ideas, in a manner not dictated by institutional and geographical constraints?
What if, in the process, you could do actual scientific research? Granted, it would be research for which no one person (or group) could take credit, but research all the same. Progress might even occur more rapidly than it does in our world, where new knowledge is shared in the form of highly refined distillates of years of work.
I fit perfectly the profile of a person who can ask novel questions and make experimental suggestions, but lacks the expertise/time/resources/authority to run them. To me this hive mind would be a godsend. If only it could take off! But then they provide a reality check:
Beyond raising concerns about the philosophy of communication, our utopian fantasy ignores important aspects of human nature. In any real world, finding collaborators would require a great deal more than shooting questions into the void and cocking an ear for the echo. In particular, in order to find a colleague with exactly the right complement of skills, interest and dependability, we need not only openness but trust. Within a laboratory group (at least, in a functional one), trust is part and parcel of lab citizenship; we and our colleagues voluntarily suspend our competitive urges in order to create a cooperative (and mutually beneficial) environment. In the wider world, however, the presumption is reversed: we tend to be cagey and suspicious in our interactions with other scientists. When we step outside the laboratory door, we transform from Musketeers (‘All for one…!’) to Mulder and Scully (‘Trust no one.’).
Oh, how I hate that they burst my fantasy bubble by providing this reality check! But thankfully, not being bound to any laboratory, I am at least immune from this cooperate-or-compete dilemma. I just hope there are more people like me (or enough foolish scientists not really bothered about plagiarism) to reach a critical mass and snowball science 2.0. They then touch on some subtler aspects of the above:
Another clash between utopia and human nature occurs at the level of publicly sharing preliminary data. In particular, during the period of transition between the status quo and the glorious future, openness may be provably irrational from a game-theoretical standpoint. If I share my data but my competitors do not, I’ve laid all of my cards out on the table, whereas others play theirs close to the vest – a bad bet under any circumstances. At best, my openness allows my adversaries to strategize; at worst, it allows them to steal my ideas. Perhaps the term ‘stealing’ is too harsh: in the words of our estimable thesis advisor, Peter Walter, ‘you can’t unthink a thought.’ Once an idea is in the field, can anyone be blamed for reacting to it in a way that is personally optimal? We already live with this moral conundrum every time we agree to review papers and need to balance the expectation of confidentiality with our own desire to shape our own future plans on the basis of the best and most current information. Radical sharing will require ways for individuals to protect themselves from the occasionally deleterious consequences of rational self-interest.
Perhaps most importantly from a practical perspective: information doesn’t share itself. From establishing an open record of preliminary discussions to freely disseminating experimental results, each step in the process requires an infrastructure. A framework, composed of software and web tools, is necessary in order to empower individual scientists to share information without each of them having to write the enabling code from scratch.
The weakest part of the article, in my opinion, is when they argue that the tools are already available. I believe we are still in the early stages of experimenting; new concepts and sites like BiomedExperts need to be experimented with, and I am sure we will soon be there. The authors suggest several sites where science 2.0 scientists purportedly hang out, and then point to reasons why that model has not succeeded yet:
Social networking tools also suffer from a variant of the ‘no one will go there until everyone goes there’ problem – the ‘me too’ dilution factor. Just as in the social/job space (Facebook, LinkedIn, MySpace, Bebo), there are myriad networks to choose from and many are too similar to distinguish. To a new user with limited time, it’s not obvious whether to try and join multiple networks, arbitrarily choose one, or wait for a clear winner to emerge.
Here’s praying that a clear victor emerges soon!
Patil, C., & Siegel, V. (2009). This revolution will be digitized: online tools for radical collaboration. Disease Models and Mechanisms, 2(5-6), 201-205. DOI: 10.1242/dmm.003285
I have recently blogged a bit about action-selection and operant learning, emphasizing that the action one chooses, out of the many possible, is driven by maximizing the utility function associated with the set of possible actions; so perhaps a quick read of the last few posts would help you appreciate where I am coming from.
To recap: whenever an organism decides to indulge in an act (an operant behavior), there are many possible actions from which it has to choose the most appropriate one. Each action leads to a possibly different outcome, and the organism may value the outcomes differentially. This valuation may be objective (how much the organism actually 'likes' the outcome once it happens) or subjective (how keenly the organism 'wants' the outcome to happen, independent of whether the outcome turns out to be pleasurable or not). Also, it is never guaranteed that an action will produce the desired/expected outcome; there is always some probability that the act may or may not result in the expected outcome. Moreover, at a macro level the organism may lack the energy required to indulge in the act or to carry it out successfully to completion. Mathematically, with each action one can associate a utility U = E x V, where U is the utility of the act; E is the expectancy as to whether one would be able to carry out the act and, if so, whether the act would result in the desired outcome; and V is the value (both subjective and objective) that one has assigned to the outcome. The problem of action-selection is then simply to compute the utility of each candidate act and choose the action with maximum utility.
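To make this concrete, here is a minimal sketch in Python of the U = E x V selection rule; the action names and the expectancy/value numbers are purely illustrative assumptions of mine, not anything from the literature.

```python
# A minimal sketch of E x V action selection. The action names and the
# expectancy/value numbers are illustrative assumptions, not data.

def select_action(actions):
    """Pick the action with maximum utility U = E * V."""
    return max(actions, key=lambda a: a["E"] * a["V"])

actions = [
    {"name": "forage", "E": 0.6, "V": 5.0},   # likely to succeed, modest payoff
    {"name": "hunt",   "E": 0.2, "V": 20.0},  # risky, but large payoff
    {"name": "rest",   "E": 0.9, "V": 1.0},   # nearly certain, small payoff
]

best = select_action(actions)
print(best["name"], best["E"] * best["V"])  # hunt 4.0
```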
Today I had an epiphany: doesn't the same logic apply to allocating attention to the various stimuli that bombard us? Assuming a spotlight view of attention, and assuming that attentional resources are limited, one is constantly faced with the problem of finding which stimuli in the world are salient and need to be attended to. Now, the leap I am making is that attention-allocation, just like choosing to act, is volitional and operant: a pro-active rather than a reactive process. It may be unconscious, but it still involves volition and 'choosing'. Remember that even acts can be reactive, and thus there is room for reactive attention; but what I am proposing is that the majority of attention is pro-active: actively choosing between stimuli and focusing on one, to try and better predict the world. We are basically prediction machines that want to predict beforehand the state of the world most relevant to us, and this we do by classical or Pavlovian conditioning. We try to associate stimuli (CS) with stimuli (UCS) or responses (UCR), and thus try to ascertain what the state of the world at time T would be, given that stimulus (CS) has occurred. Apart from being prediction machines, we are also agents that try to maximize rewards and minimize punishments by acting on this knowledge and interacting with the world. There are thousands of actions we can indulge in, but we choose wisely; there are thousands of stimuli in the external world, but we attend to salient features wisely.
Let me elaborate on the analogy. While selecting an action we maximize reward and minimize punishment; basically, we choose the act with maximal utility. While choosing which stimuli to attend to, we maximize our foreknowledge of the world and minimize surprises; basically, we choose the stimulus with the maximal predictability function. We can even write an equivalent mathematical formula: P = E x R, where P is the increase in predictability due to attending to stimulus 1; E is the probability that stimulus 1 correctly leads to prediction of stimulus 2; and R is the relevance of stimulus 2 (the information) to us. Thus the stimulus one attends to is the one that leads to the maximum gain in predictability. Also, just as the general energy level of the organism biases whether, and how much, the organism acts, there is a general arousal level of the organism that biases whether and how much it attends to stimuli.
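And here is the parallel sketch for attention-allocation under P = E x R; again, the stimulus names and all numbers are my own illustrative assumptions, and `arousal` and `threshold` are hypothetical knobs standing in for the organism's general arousal level gating attention.

```python
# A minimal sketch of P = E x R attention-allocation, parallel to the E x V
# action-selection sketch above. Stimulus names and numbers are assumptions.

def attend(stimuli, arousal=1.0, threshold=0.5):
    """Return the stimulus with maximum predictability gain P = E * R,
    or None if the arousal-scaled gain is too low to engage attention."""
    best = max(stimuli, key=lambda s: s["E"] * s["R"])
    gain = arousal * best["E"] * best["R"]
    return best if gain >= threshold else None

stimuli = [
    {"name": "tone",  "E": 0.8, "R": 2.0},   # reliably predicts a mildly relevant event
    {"name": "light", "E": 0.3, "R": 10.0},  # weak predictor of a highly relevant event
    {"name": "noise", "E": 0.1, "R": 1.0},   # uninformative background
]

print(attend(stimuli)["name"])       # light (0.3 * 10 = 3.0 beats 0.8 * 2 = 1.6)
print(attend(stimuli, arousal=0.1))  # None: low arousal, nothing gets attended
```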
So, what new insights do we gain from this formulation? The first insight comes from elaborating the analogy further. We know that the basal ganglia in particular, and dopamine in general, are involved in action-selection; dopamine is also heavily involved in operant learning. We can predict that dopamine systems, and the same underlying mechanisms, may also be used for attention-allocation, and that dopamine may be heavily involved in classical learning as well. Moreover, the basic computations and circuitry involved in allocating attention should be similar to those involved in action-selection. Both disciplines can learn from each other and utilize methods developed in one field for understanding and elaborating phenomena in the other. For example, we know that dopamine, while coding for reward-error/incentive salience, also codes for novelty and is heavily involved in novelty detection. Is novelty detection driven by the need to avoid surprises, especially while allocating attention to a novel stimulus?
What are some of the predictions we can make from this model? Just as with the abundant literature on U = E x V in the decision-making and action-selection literature, we should be able to show the independent and interacting effects of expectancy and relevance on the attention-grabbing properties of a stimulus. The relevance of different stimuli can be manipulated by pairing them with a UCR/UCS that has different degrees of relevance. The expectancy can be differentially manipulated by the strength of conditioning: more trials would mean a stronger association between the CS and UCS. Also, the level of arousal may bias the ability to attend to stimuli. I am sure there is much that attention research can learn from the research on decision-making and action-selection, and the reverse would also be true. It may even be that attention-allocation is already conceptualized in the above terms; if so, I plead ignorance of this sub-field and would love a few pointers so that I can refine my thinking and framework.
Also consider that there is already some literature implicating dopamine in attention, and the fact that dopamine dysfunction in schizophrenia, ADHD, etc. has cognitive and attentional implications is an indication in itself. Also, the contextual salience of drug-related cues may be a powerful effect of dopamine-based classical conditioning and attention allocation hijacking the normal dopamine pathways in addicted individuals.
Lastly, I got set on this direction while reading an article on the chaining of actions to get desired outcomes, and on how two different brain systems (a cognitive, prefrontal 'high road' based on model-based reinforcement learning, and an unconscious 'low road' (dorsolateral striatal) based on model-free reinforcement learning) may be involved in deciding which action to choose and select. I believe the same conundrum would present itself when one turns to the attention-allocation problem, where stimuli are chained together and predict each other in succession; I would predict that there would be two roads involved here too! But that is matter for a future post. For now, I would love some honest feedback on what value, if any, this new conceptualization adds to what we already know about attention allocation.
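For readers unfamiliar with the distinction, here is a toy sketch of how the two roads differ computationally; the cached values and the one-step world model are made-up numbers chosen to show the classic devaluation dissociation, not anything from the article I read.

```python
# A toy sketch of the two "roads" mentioned above, under assumed numbers.
# Model-free: act values are cached from past reward (habit-like, low road).
# Model-based: act values are computed on the fly from a known model of
# state transitions and outcomes (planful, high road).

# Hypothetical cached values learned from past experience (model-free).
q_cached = {"lever": 0.9, "chain": 0.2}

# Hypothetical one-step world model (model-based): action -> (P(outcome), value).
world_model = {"lever": (0.1, 1.0),   # outcome was devalued; the model knows this
               "chain": (0.8, 1.0)}

model_free_choice = max(q_cached, key=q_cached.get)
model_based_choice = max(world_model,
                         key=lambda a: world_model[a][0] * world_model[a][1])

print(model_free_choice)   # lever  (habit persists after devaluation)
print(model_based_choice)  # chain  (planning tracks the current model)
```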
Daniel Nettle writes an article in the Journal of Theoretical Biology about the evolution of low mood states. Before I get to his central thesis, let us review what he reviews:
Low mood describes a temporary emotional and physiological state in humans, typically characterised by fatigue, loss of motivation and interest, anhedonia (loss of pleasure in previously pleasurable activities), pessimism about future actions, locomotor retardation, and other symptoms such as crying.
This paper focuses on a central triad of symptoms which are common across many types of low mood, namely anhedonia, fatigue and pessimism. Theorists have argued that, whereas their opposites facilitate novel and risky behavioural projects, these symptoms function to reduce risk-taking. They do this, proximately, by making the potential payoffs seem insufficiently rewarding (anhedonia), the energy required seem too great (fatigue), or the probability of success seem insufficiently high (pessimism). An evolutionary hypothesis for why low mood has these features, then, is that it is adaptive to avoid risky behaviours when one is in a relatively poor current state, since one would not be able to bear the costs of unsuccessful risky endeavours at such times.
I would like to pause here and note how beautifully he has summed up the low mood symptoms and key features; I will take the liberty of redefining them using my own framework of Value x Expectancy and the distinction between the cognitive ('wanting') and behavioral ('liking') sides of things:
- Anhedonia: behavioral inability to feel rewarded by previously pleasurable activities. Loss of ‘liking’ following the act. Less behavioral Value assigned.
- Loss of motivation and interest: cognitive inability to look forward to or value previously desired activities. Loss of ‘wanting’ prior to the act. Less cognitive Value assigned.
- Fatigue: behavioral inability to feel that one can achieve the desired outcome, due to the feeling that one lacks sufficient energy to carry the act to success. Less behavioral Expectancy assigned.
- Pessimism: cognitive inability to expect good things about the future or to believe that good outcomes are possible. Less cognitive Expectancy assigned.
The reverse conglomeration is found in high mood: high wanting and liking, high energy and a positive outlook. Thus, I agree with Nettle fully that low and high mood are defined by these opposed features, and also that these features are powerful proximate mechanisms determining the risk proneness of the individual: by subjectively manipulating the Value and Expectancy associated with an outcome, high and low mood mediate the risk proneness that an organism displays while assigning a utility to an action. Thus, it is fairly settled: if the ultimate goal is to increase risk-prone behavior, then the organism should use the proximate mechanism of high mood; if the ultimate goal is to avoid risky behavior, then the organism should display low mood, which would proximately help it avoid risky behavior.
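One hedged way of making this proximate story concrete: model pessimism as an exponent on expectancy, so that improbable outcomes are discounted disproportionately in low mood. The functional form and the numbers below are entirely my own assumptions, not Nettle's model.

```python
# A toy sketch (my own assumption, not Nettle's model) of how a pessimism
# parameter acting on expectancy can flip an E x V choice from risky to safe.
# Raising E to a power k > 1 discounts improbable outcomes disproportionately,
# which is one simple way low mood could proximately produce risk aversion.

safe  = {"E": 0.9, "V": 2.0}   # near-certain, small payoff
risky = {"E": 0.3, "V": 8.0}   # unlikely, large payoff

def utility(act, k):
    return (act["E"] ** k) * act["V"]

for label, k in [("high/neutral mood (k=1)", 1.0), ("low mood (k=2)", 2.0)]:
    choice = "risky" if utility(risky, k) > utility(safe, k) else "safe"
    print(label, "->", choice)
# high/neutral mood (k=1) -> risky  (2.4 vs 1.8)
# low mood (k=2)          -> safe   (0.72 vs 1.62)
```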
Now let me talk about Nettle's central thesis. It has been previously proposed in the literature that low mood (and thus risk-aversion) is due to being in a poor state, wherein one can avoid energy expenditure (and thus a worsening of the situation) by keeping a low profile. Nettle plays the devil's advocate and argues that exactly the opposite case can be made: the organism in a poor state needs to indulge in high-risk (and high-energy) activities to get out of that poor state. Thus, there is no a priori reason why one explanation should be sounder than the other. To find out when exactly high-risk behavior pays off and when low-risk behavior is more optimal, he develops a model and uses some elementary mathematics to derive conclusions. He bases his model, of course, on a preventive focus, whereby the organism tries to avoid falling into a dreaded sub-threshold state R; he allows the current state S(t) to be maximized under the constraint that one does not lose sight of R. I'll not go into the mathematics, but the results are simple. When there is a large gap between R (the dreaded state) and S (the current state), the organism adopts a risky behavioral profile; when R and S are close, it maintains low-risk behavior; however, when circumstances are dire (R and S are very close), risk proneness again rises to dramatic levels. To quote:
The model predicts that individuals in a good state will be prepared to take relatively large risks, but as their state deteriorates, the maximum riskiness of behaviour that they will choose declines until they become highly risk-averse. However, when their state becomes dire, there is a predicted abrupt shift towards being totally risk-prone. The switch to risk-proneness at the dire end of the state continuum is akin to that found near the point of starvation in the original optimal foraging model from which the current one is derived (Stephens, 1981). The graded shift towards greater preferred risk with improving state is novel to this model, and stems from the stipulation that if the probability of falling into the danger zone in the next time step is minimal, then the potential gain in S at the next time step should be maximised. However, a somewhat similar pattern of risk proneness in a very poor state, risk aversion in an intermediate state, and some risk proneness in a better state, is seen in an optimal-foraging model where the organism has not just to avoid the threshold of starvation, but also to try to attain the threshold of reproduction (McNamara et al., 1991). Thus, the qualitative pattern of results may emerge quite generally from models using different assumptions.
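To see the shape of this prediction, here is a toy sketch that reproduces only the qualitative pattern quoted above; the piecewise form and the thresholds are my own illustrative assumptions, not the equations of Nettle (2009).

```python
# A toy sketch of the qualitative risk profile: abruptly risk-prone when dire,
# risk-averse just above dire, graded rise in preferred risk with improving
# state. Thresholds and the piecewise form are illustrative assumptions.

def preferred_risk(s, r=0.0, dire_margin=1.0, scale=10.0):
    """Map current state s (relative to danger threshold r) to a 0-1 risk level."""
    gap = s - r
    if gap <= dire_margin:          # dire: nothing to lose, go all-in
        return 1.0
    return min(1.0, (gap - dire_margin) / scale)  # graded rise with improving state

for s in [0.5, 1.5, 3.0, 8.0, 15.0]:
    print(f"state={s:4.1f}  preferred risk={preferred_risk(s):.2f}")
# state= 0.5 -> 1.00 (dire: abrupt shift to total risk-proneness)
# state= 1.5 -> 0.05 (just above dire: highly risk-averse)
# state= 3.0 -> 0.20
# state= 8.0 -> 0.70
# state=15.0 -> 1.00 (good state: large risks tolerated)
```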
Nettle then extrapolates the clinical significance of this by proposing that 'agitated'/'excited' depression can be explained as the organism being in dire straits and thus becoming risk-prone. He also uses a similar logic for dysphoric mania, although I don't buy that. However, I agree that euphoric mania may just be the extreme of high mood, greater risk proneness and goal achievement, while depression is the extreme of low mood, adverse circumstances and risk aversion. To me this model ties up certain things we know about life circumstances, risk profiles and the mood tone of people, and deepens our understanding.
Nettle, D. (2009). An evolutionary model of low mood states. Journal of Theoretical Biology, 257(1), 100-103. DOI: 10.1016/j.jtbi.2008.10.033
In my last post I hinted that bipolar mania and depression may both be characterized by an excessive and overactive self-regulatory focus, with promotion focus related to mania and prevention focus related to depression. It is important to pause and note that the bipolar propensity is towards more self-referential, goal-directed activity, resulting in excessive use of self-regulatory focus. To clarify, I am sticking my neck out and claiming that depression is marked by an excessive obsession with self-oriented, goal-directed activities, but with a preventive focus, thus centering on the self's responsibilities, duties, obligations, etc. with respect to near and dear ones. Mania, on the other hand, also has an excessive self-oriented, goal-directed focus, but the focus is promotional, with an obsession with hopes, aspirations, etc., which are relatively more inward-focused and not as dependent on significant others.
Thus, I characterize depression as a state where the regulatory reference is negative (one is focused on avoiding a negative end-state, like being a burden on others), the regulatory anticipation is negative (one anticipates pain as a result of almost any act one may perform, and thus dreads day-to-day activity) and the regulatory focus is negative (a preventive focus, whereby one is more concerned with duties and obligations, and security is a paramount need). The entire depressive syndrome can be summed up as an overactivity of avoidance-based mechanisms. However, note that there is still an excess of self-referential/self-focused thinking, and one is greatly motivated (though perhaps lacking the energy) to bridge the differences between the real self and the 'ought' self. One can say that one's whole life revolves around trying to become the 'ought' self, or rather that one conceptualizes oneself in terms of the 'ought' self.
Contrast this with mania, where the regulatory reference is positive (one is focused on achieving something grandiose), the regulatory anticipation is positive (one feels in control and believes that only good things can happen to the self) and the regulatory focus is positive (a promotional focus, whereby one is more concerned with hopes, aspirations, and growth/actualization needs). Still, just as in depression, there is an excess of focus on the self, and one is greatly motivated (and also has the energy) to bridge the difference between the real and the 'ideal' self. One can say that one's whole life revolves around trying to become the 'ideal' self, or rather that one conceptualizes oneself in terms of an 'ideal' self.
What can we predict from the above? We know that the brain's default network is involved in self-focused thoughts and ruminations. We also know that the default network is overactive in schizophrenics (and, by extension, I believe, in bipolars, who have the same underlying pathology, at least as far as the psychotic spectrum is concerned), and thus we can say with some confidence that the regulatory focus should indeed be high in bipolars and should be correlated with default network activity. We can further predict that during the manic phase the promotion-focus-related neural network should be more active, and during the depressive phase the prevention-related areas of the brain should be more active. This last hypothesis still needs experimentation, but let's backtrack a bit and first look at the neural correlates of the promotion and prevention regulatory self-foci.
For this, I refer readers to an, in my view, important study that tried to dissociate medial PFC and PCC activity (both of which belong to the default network) while people engaged in self-reflection. Here is the abstract of the study:
Motivationally significant agendas guide perception, thought and behaviour, helping one to define a ‘self’ and to regulate interactions with the environment. To investigate neural correlates of thinking about such agendas, we asked participants to think about their hopes and aspirations (promotion focus) or their duties and obligations (prevention focus) during functional magnetic resonance imaging and compared these self-reflection conditions with a distraction condition in which participants thought about non-self-relevant items. Self-reflection resulted in greater activity than distraction in dorsomedial frontal/anterior cingulate cortex and posterior cingulate cortex/precuneus, consistent with previous findings of activity in these areas during self-relevant thought. For additional medial areas, we report new evidence of a double dissociation of function between medial prefrontal/anterior cingulate cortex, which showed relatively greater activity to thinking about hopes and aspirations, and posterior cingulate cortex/precuneus, which showed relatively greater activity to thinking about duties and obligations. One possibility is that activity in medial prefrontal cortex is associated with instrumental or agentic self-reflection, whereas posterior medial cortex is associated with experiential self-reflection. Another, not necessarily mutually exclusive, possibility is that medial prefrontal cortex is associated with a more inward-directed focus, while posterior cingulate is associated with a more outward-directed, social or contextual focus.
The authors then touch upon something similar to what I have said above: one can be very planful or goal-directed (the bipolar propensity), but it would still make sense to ask whether the focus is promotional or preventive. To quote:
The idea of variation in individuals’ regulatory focus highlights the difference between agendas and traits; two people could both be described by the trait ‘planful’, but planful about what? A person with a predominantly promotion focus would be more likely to be planful about attaining positive rewards or outcomes, while a person with a predominantly prevention focus would be more likely to be planful about avoiding negative events or outcomes. Although a promotion or prevention focus may dominate, the aspects of the self that are active change dynamically across situations (e.g. Markus and Wurf, 1987), thus most individuals have both promotion and prevention agendas. For example, the same person can hold both the hope of becoming rich (a promotion agenda) and the duty to support an aging parent (a prevention agenda), or the aspiration to be a good citizen and the obligation to be a well-informed voter. As individuals, hopes and aspirations and duties and obligations make up a large part of our mental life and constitute the motivational scaffolding for much of our behaviour.
Now comes the study design:
The present studies investigated neural activity when participants were asked to think about self-relevant agendas related to either a promotion (think about your hopes and aspirations) or prevention (think about your duties and obligations) focus. We compared neural activity associated with thinking about these two different types of self-relevant agendas and with thinking about non-self-relevant topics (distraction). We expected greater activity in anterior and/or posterior medial regions associated with these two self-reflection conditions compared with the distraction control condition because thinking about one’s agendas, like thinking about one’s traits, is self-referential. Such a finding would also be consistent, for example, with Luu and Tucker’s (2004) proposal that both anterior cingulate and posterior cingulate cortex contribute to action regulation by representing goals and expectancies.
And this is what they found:
A double dissociation was found when participants were cued to think about promotion and prevention agendas on different trials for the first time during scanning (Experiment 2) and when they spent several minutes thinking about either promotion or prevention agendas before scanning (Experiment 1), indicating that it results from what participants are thinking about during the scan and not from some general effect (e.g. mood) carried over from the pre-scan period of self-reflection.
Here is what they discuss:
In short, the double dissociation between medial PFC and anterior/inferior medial posterior areas and our two self-reflection conditions indicates that these brain areas serve somewhat different functions during self-focus. There are a number of interesting possibilities that remain to be sorted out. Differential activity in these anterior medial and posterior medial regions as a function of the types of agendas participants were asked to think about could reflect: (i) differences in the representational content in the specific features of agendas, schemas, possible selves and so forth that constitute hopes and aspirations on the one hand and duties and obligations on the other (cf. Luu and Tucker, 2004); (ii) differences in the type(s) of component processes these agendas are likely to engage and/or the representational content they are likely to activate, for example, discovering new possibilities (hopes) vs retrieving episodic memories (e.g. Maddock et al., 2001) of past commitments (duties); (iii) differences in affective significance of hopes and aspirations (attaining the positive) and duties and obligations (avoiding the negative, Higgins, 1997; 1998); (iv) different aspects of the subjective experience of self, such as the subjective experience of control (an instrumental self) vs the subjective experience of awareness (an experiential self; Johnson, 1991; Johnson and Reeder, 1997; compare, e.g. Searle, 1992 and Weiskrantz, 1997, vs Shallice, 1978 and Umilta, 1988); (v) differences in the social significance of hopes and aspirations (more individual) and duties and obligations (involving others). This last possibility is suggested by findings linking the posterior cingulate with taking the perspective of another (Jackson et al., 2006). It may be that thinking about duties and obligations (a more outward focus) tends to involve more perspective-taking than does thinking about hopes and aspirations (a more inward focus). The greater number of mental/emotional references from the promotion group on the pre-scan essay and the tendency for a greater number of references to others from the prevention group are consistent with the hypothesis that medial PFC activity is associated with a more inward focus whereas posterior cingulate/precuneus activity is associated with a more outward, social focus. Clarifying the basis of the similarities and differences between neural activation associated with thinking about hopes and aspirations vs duties and obligations would begin to help differentiate the relative roles of brain regions in different types of self-reflective processing.
They do discuss the clinical significance of their studies, but not in the terms I would have loved. I would like to see whether there is state/trait hyperactivity, and a dissociation between mPFC and PCC activation, when the variable of a depressive or manic episode is introduced. I'll place my bets that there would be an interaction between the type of episode and overactivity in the corresponding default-network regions; but I would like to see that data collected.
So my thesis is that the self-reflective, self-focused default network is overactive in bipolar/psychotic spectrum people, but a bias or tilt towards a promotion or preventive focus leads to their recurring and periodic episodes of mania and depression.
Lastly, let me touch upon affect in these states and what Higgins had to say about this in his paper, covered yesterday. Higgins proposed that bipolar disorder is due to a promotional focus, with mania induced when there is not much mismatch (or awareness of mismatch) between the ideal and real self, while depression, or sadness and melancholia, is induced when one becomes aware of the discrepancy between the ideal and the real self. He proposes that a discrepancy between the 'ought' and real self leads to anxiety and nervousness/agitation, while a preventive focus with congruency between the 'ought' and real self leads to calmness/quiescence.
I disagree with his formulations, inasmuch as I differentiate between a regulatory focus and the corresponding awareness of discrepancies along that dimension. To Higgins they are the same: if someone has a promotional focus, he would also be more aware of the discrepancies between his ideal and real self, and thus be saddened. I disagree. I believe that if one has a promotional focus, one is driven by goals to bring the real self as close to the ideal self as possible; if one is not able to do so, one would use defense mechanisms to delude oneself and would not admit the reality, as incongruence along the focused dimension is too painful. However, because one is consciously focused on promotion, one would be aware of trade-offs and would acknowledge to oneself that one's 'ought' self, which is anyway not too important to one's self-concept, is not congruent with the real self. Thus, one with a predominant promotion focus may be painfully aware of the discrepancy between his 'ought' and real self and thus might be nervous, agitated and irritable: all symptoms of mania.
A depressive person, on the other hand, has a predominant preventive focus, and all actions/ruminations are driven by responsibilities and obligations. Here, acknowledging to oneself that one has failed in meeting obligations may be catastrophic, so one will try to delude oneself that one is closer to the 'ought' self than is the case. However, one may not require any defense mechanisms when judging the discrepancy between the 'ideal' and real self, as that 'ideal' self is no longer a matter of life and death! One would be aware that one is not focusing much on hopes and aspirations, and thus feel despondent/sad/melancholic: again, classic symptoms of depression. Yet, despite the affect of sadness, all rumination would be focused on the 'ought' self, and thus the content would be of guilt, duties, burdens, responsibilities, etc.
I'm sure there is some grain of truth in my formulation, but I won't be able to state it emphatically unless the dissociation study proposed above, involving the default regions and bipolar subjects, is done. If one of you decides to do it, do let me know the results, even if they contradict the thesis.
Johnson, M. (2006). Dissociating medial frontal and posterior cingulate activity during self-reflection. Social Cognitive and Affective Neuroscience, 1(1), 56-64. DOI: 10.1093/scan/nsl004
Higgins, E. T. (1997). Beyond pleasure and pain. American Psychologist, 52, 1280-1300.
A predominant, but unstated, assumption biasing many research paradigms is that children are just mini-adults with less well developed mechanisms, fundamentally using and relying on the same unitary cognitive mechanisms as adults. This has proven wrong time and again, and better psychologists now agree that children view the world in a fundamentally different manner from adults. I have covered some research in the past showing, for example, that while differentiating between two color hues (categorical color perception), children show more right hemisphere dominance (non-verbal), while adults rely on the left hemisphere (verbal knowledge). Over development, the RH processes are overshadowed by the maturing LH verbal processes, as far as categorical perception is concerned.
This recent PNAS article, by none other than the famed Chris Chatham of Developing Intelligence fame, is an effort in the same direction, showing that children use a different mechanism than adults when it comes to cognitive control: while adults use a more proactive cognitive control, children rely on a reactive cognitive control. The authors do a good job of describing proactive and reactive cognitive control, so over to them:
Although sometimes derided as "creatures of habit," humans develop an unparalleled ability to adaptively control thought and behavior in accordance with current goals and plans. Dominant theories of cognitive control suggest that this flexibility is enabled by the proactive regulation of behavior through sustained inhibition of inappropriate thoughts and actions, the active biasing of task-relevant thoughts, or construction of rule-like representations. Theories of the developmental origins of cognitive control converge in positing that children engage these same proactive processes, but in a weaker form, with less strength or stability, less resistance toward habitual responses, or degraded complexity.
However, children can be notoriously constrained to the present, raising the possibility that the temporal dynamics of immature cognitive control are fundamentally different from that of adults. Specifically, we hypothesized that young children may show "reactive" as opposed to "proactive" context processing, characterized by a failure to proactively prepare for even the predictable future and a tendency to react to events only as they occur, retrieving information from memory as needed in the moment. For lack of age-appropriate methods, the possibility of this qualitative developmental shift has not been directly tested.
They also describe the paradigm used beautifully so again quoting from the article:
In the AX-CPT, subjects provide a target response to a particular probe ("X") if it follows a specific contextual cue ("A"). Nontarget responses are provided to other cue–probe sequences ("A" then "Y," "B" then "X," or "B" then "Y"), each occurring with lower probability than the target pair. This asymmetry in trial type frequency is critical for revealing distinct behavioral profiles for proactive versus reactive control. Proactive control supports good BX trial performance at the expense of AY trials. Maintenance of the "B" cue supports a nontarget response to the subsequent "X" probe; however, maintenance of the "A" cue leads to anticipation of an X and thus a target response (due to the expectancy effect cultivated by the asymmetry in trial type frequencies), which can lead to false alarms in AY trials. Reactive control leads to the opposite pattern. The preceding cue is retrieved when needed, that is, in response to "X" probes but not to "Y" probes. Such retrieval renders BX trials vulnerable to retrieval-based interference; the lack of such retrieval on AY trials means that false alarms are less likely in this case. Similarly, proactive control should lead to increased delay-period effort, whereas reactive control should lead to increased effort to probes.
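To make the predicted error signatures concrete, here is a toy simulation sketch; the per-trial-type false-alarm rates are assumed numbers of mine, chosen to embody the logic above, not the paper's data.

```python
# A toy sketch of the AX-CPT behavioral signature described above, under
# assumed error probabilities. Proactive control maintains the cue, helping
# BX trials but hurting AY trials; reactive control retrieves the cue only
# when an "X" probe appears, producing the opposite pattern.

import random
random.seed(0)

# Hypothetical false-alarm rates per trial type for each control mode.
ERROR_RATES = {
    "proactive": {"AX": 0.02, "AY": 0.20, "BX": 0.05, "BY": 0.02},
    "reactive":  {"AX": 0.02, "AY": 0.05, "BX": 0.20, "BY": 0.02},
}

def simulate(mode, n_trials=1000):
    """Count errors per trial type for a given control mode."""
    # 70% AX targets; rare AY/BX/BY nontargets, as in the AX-CPT asymmetry.
    types = ["AX"] * 70 + ["AY"] * 10 + ["BX"] * 10 + ["BY"] * 10
    errors = {t: 0 for t in ERROR_RATES[mode]}
    for _ in range(n_trials):
        t = random.choice(types)
        if random.random() < ERROR_RATES[mode][t]:
            errors[t] += 1
    return errors

for mode in ("proactive", "reactive"):
    print(mode, simulate(mode))
# Expect proactive runs to show more AY errors, reactive runs more BX errors.
```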
What they found was consistent with their hypothesis: the reaction time data, the effort data gauged from pupillometry, and the speed-accuracy trade-off data all pointed to children using a reactive cognitive control mechanism while adults used a proactive one. This is what they conclude:
By dissociating proactive and reactive control mechanisms in children, our findings call into question a previously untested assumption of developmental theories of cognitive control, that is, relative to young adults, weaker but qualitatively similar control processes guide the task performance of children. Of course, children and even infants may be capable of sustaining context representations over shorter delays than the 1.2 s used here, but such limited proactive mechanisms would seem unlikely to strongly influence most behaviors.
Further research is needed to determine the processes that drive the developmental transition from reactive to proactive control. This qualitative shift could reflect genuinely qualitative changes, for example, in metacognitive strategies that allow children to engage proactive control. Alternatively (or additionally), the underlying mechanisms for this qualitative shift could be continuous. For example, the gradual strengthening of task-relevant representations could allow proactive control to become effective, thus supporting a shift in the temporal dynamics of control. In any case, the developmental progression to be addressed is a shift from reactive to proactive control rather than merely positing incremental improvements with development.
I think these are steps in the right direction. I lean towards a stage-theory account of development, so I am supportive of a dramatic developmental stage whereby reactive cognitive control mechanisms are replaced by proactive ones, although both strategies may be equally available to children at the critical age. However, it may be that the neural architecture for proactive CC develops late (just like linguistic CP) and overrides the default reactive CC circuit. That dominance of proactive CC over reactive CC should, to me, mark an important developmental stage.
Thanks Chris, for your wonderful blog posts and this paper!
Chatham, C., Frank, M., & Munakata, Y. (2009). Pupillometric and behavioral markers of a developmental shift in the temporal dynamics of cognitive control. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.0810002106