Sandeep Gautam is a psychology and cognitive neuroscience enthusiast, whose basic grounding is in computer science.
A predominant, but unstated, assumption that biases many research paradigms is that children are just mini-adults with less well developed mechanisms, but fundamentally using and relying on the same unitary cognitive mechanisms that adults use. This has proven wrong time and again, and better psychologists now agree that children view the world in a fundamentally different manner from adults. I have covered research in the past showing, for example, that while differentiating between two color hues (categorical color perception), children show more right-hemisphere (non-verbal) dominance, while adults rely on the left hemisphere (verbal knowledge). Over development, the RH processes are overshadowed by the maturing LH verbal processes, as far as categorical perception is concerned.
This recent PNAS article, by none other than Chris Chatham of Developing Intelligence fame, is an effort in the same direction, showing that children use a different mechanism than adults when it comes to cognitive control: while adults use a more proactive cognitive control, children rely on a reactive cognitive control. The authors do a good job of describing proactive and reactive cognitive control, so over to them:
Although sometimes derided as ‘‘creatures of habit,’’ humans develop an unparalleled ability to adaptively control thought and behavior in accordance with current goals and plans. Dominant theories of cognitive control suggest that this flexibility is enabled by the proactive regulation of behavior through sustained inhibition of inappropriate thoughts and actions, the active biasing of task-relevant thoughts, or construction of rule-like representations. Theories of the developmental origins of cognitive control converge in positing that children engage these same proactive processes, but in a weaker form, with less strength or stability, less resistance toward habitual responses, or degraded complexity.
However, children can be notoriously constrained to the present, raising the possibility that the temporal dynamics of immature cognitive control are fundamentally different from that of adults. Specifically, we hypothesized that young children may show ‘‘reactive’’ as opposed to ‘‘proactive’’ context processing, characterized by a failure to proactively prepare for even the predictable future and a tendency to react to events only as they occur, retrieving information from memory as needed in the moment. For lack of age-appropriate methods, the possibility of this qualitative developmental shift has not been directly tested.
They also describe the paradigm used beautifully, so again quoting from the article:
In the AX-CPT, subjects provide a target response to a particular probe (‘‘X’’) if it follows a specific contextual cue (‘‘A’’). Nontarget responses are provided to other cue–probe sequences (‘‘A’’ then ‘‘Y,’’ ‘‘B’’ then ‘‘X,’’ or ‘‘B’’ then ‘‘Y’’), each occurring with lower probability than the target pair. This asymmetry in trial type frequency is critical for revealing distinct behavioral profiles for proactive versus reactive control. Proactive control supports good BX trial performance at the expense of AY trials. Maintenance of the ‘‘B’’ cue supports a nontarget response to the subsequent ‘‘X’’ probe; however, maintenance of the ‘‘A’’ cue leads to anticipation of an X and thus a target response (due to the expectancy effect cultivated by the asymmetry in trial type frequencies), which can lead to false alarms in AY trials. Reactive control leads to the opposite pattern. The preceding cue is retrieved when needed, that is, in response to ‘‘X’’ probes but not to ‘‘Y’’ probes. Such retrieval renders BX trials vulnerable to retrieval-based interference; the lack of such retrieval on AY trials means that false alarms are less likely in this case. Similarly, proactive control should lead to increased delay-period effort, whereas reactive control should lead to increased effort to probes.
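To make the two predicted error profiles concrete, here is a toy sketch in Python (entirely my own illustration, not the authors' model) of which trial type each control mode renders error-prone:

```python
# Toy sketch (my own illustration, not the authors' model) of the
# opposite error profiles predicted on the AX-CPT.

def respond(cue, probe, mode):
    """Return (correct_response, error_prone) for a cue-probe pair.

    mode='proactive': the cue is maintained across the delay, so an 'A' cue
    breeds an expectancy of a target, hurting AY trials while helping BX.
    mode='reactive': the cue is retrieved only when an 'X' probe appears, so
    BX trials suffer retrieval interference while AY trials stay easy.
    """
    correct = "target" if (cue == "A" and probe == "X") else "nontarget"
    if mode == "proactive":
        error_prone = (cue == "A" and probe == "Y")  # expectancy-driven false alarm
    else:
        error_prone = (cue == "B" and probe == "X")  # interference at retrieval
    return correct, error_prone

for mode in ("proactive", "reactive"):
    hard = [c + p for c in "AB" for p in "XY" if respond(c, p, mode)[1]]
    print(mode, "-> most error-prone trial type:", hard)
```

Run it and the proactive mode flags AY as the vulnerable trial type while the reactive mode flags BX, which is exactly the behavioral dissociation the paradigm exploits.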
What they found was consistent with their hypothesis. The reaction-time data, the effort data gauged from pupillometry, and the speed-accuracy trade-off data all pointed to the fact that children used a reactive cognitive control mechanism while adults used a proactive one. This is what they conclude:
By dissociating proactive and reactive control mechanisms in children, our findings call into question a previously untested assumption of developmental theories of cognitive control, that is, relative to young adults, weaker but qualitatively similar control processes guide the task performance of children. Of course, children and even infants may be capable of sustaining context representations over shorter delays than the 1.2 s used here, but such limited proactive mechanisms would seem unlikely to strongly influence most behaviors.
Further research is needed to determine the processes that drive the developmental transition from reactive to proactive control. This qualitative shift could reflect genuinely qualitative changes, for example, in metacognitive strategies that allow children to engage proactive control. Alternatively (or additionally), the underlying mechanisms for this qualitative shift could be continuous. For example, the gradual strengthening of task-relevant representations could allow proactive control to become effective, thus supporting a shift in the temporal dynamics of control. In any case, the developmental progression to be addressed is a shift from reactive to proactive control rather than merely positing incremental improvements with development.
I think these are steps in the right direction; I lean towards a stage-theory account of development, so I am supportive of a dramatic developmental stage whereby reactive cognitive control mechanisms are replaced by proactive ones, although both strategies may be equally available to children at the critical age. However, it may be the case that the neural architecture for proactive CC develops late (just like linguistic CP) and overrides the default reactive CC circuit. That dominance of proactive CC over reactive CC, to me, should mark an important developmental stage.
Thanks Chris, for your wonderful blog posts and this paper!
Chatham, C., Frank, M., & Munakata, Y. (2009). Pupillometric and behavioral markers of a developmental shift in the temporal dynamics of cognitive control. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.0810002106
The hedonic principle says that we are motivated to approach pleasure and avoid pain. This, as per Higgins, is too simplistic a formulation. He supplants it with his concepts of regulatory focus, regulatory anticipation, and regulatory reference. That is a lot of jargon for a single post, but let us see if we can make sense of it.
First, let us conceptualize a desired end-state that an organism wants to be in, say eating food and satisfying hunger. This desired end-state becomes the current goal of the organism and leads to goal-directed behavior. Now, it is proposed that, given this desired end-state, the organism has two ways to go about achieving or moving towards it. If the organism has a promotion or achievement self-regulation focus, then it will be more sensitive to whether the positive outcome is achieved or not and will thus have an approach orientation, whereby it tries to match its next state to the desired state, approaching the desired end-state as closely as possible. On the other hand, if the organism has a prevention or safety self-regulation focus, then it will be more sensitive to the negative outcome, as to whether it becomes worse off after the behavior, and will have an avoidance orientation, whereby it tries to minimize the mismatch between its next state and the desired state. Thus, given n next states with different food availability, the person with a promotion focus will choose a next state that is as close as possible, say within a particular threshold, to the desired state of satiety; while the person with a prevention focus will be driven by avoiding all the states that have sub-threshold food availability and are thus mismatched with the end-goal of satiety. Thus, the number of states, and the actual states, available for choosing from are different for the two groups: the first set is derived from whether the states are within a particular range of the end-state; the second set is derived from excluding all the states that are not within a particular range of the end-state. Put this way, it is easy to see that these strategies of promotion or prevention focus place different cognitive and computational demands: the former requires exploration/maximizing, while the other may be satisfied by satisficing.
(See my earlier post on exploration/exploitation and satisficers/maximizers, where I believe I was slightly mistaken.)
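The contrast can be made concrete with a minimal sketch (the thresholds, values, and function names here are my own, purely illustrative): the promotion-focused chooser scans every candidate state for the best match, while the prevention-focused chooser merely rules out mismatches and settles for the first survivor.

```python
# Illustrative only: a promotion-focused (maximizing) vs. a
# prevention-focused (satisficing) choice over candidate next states,
# each scored by food availability relative to a desired satiety level.

DESIRED = 10     # desired level of satiety (arbitrary units)
THRESHOLD = 3    # how far from the goal still counts as a "match"

def promotion_choice(states):
    """Approach matches: scan every state, pick the closest to the goal."""
    matches = [s for s in states if abs(s - DESIRED) <= THRESHOLD]
    return min(matches, key=lambda s: abs(s - DESIRED)) if matches else None

def prevention_choice(states):
    """Avoid mismatches: discard sub-threshold states, settle for the first survivor."""
    for s in states:
        if abs(s - DESIRED) <= THRESHOLD:  # not a mismatch -> good enough
            return s
    return None

states = [2, 8, 11, 5, 9]
print(promotion_choice(states))   # exhaustive comparison -> 11 (closest match)
print(prevention_choice(states))  # satisficing -> 8 (first acceptable state)
```

Note how the promotion strategy must examine all candidates (a maximizing search), whereas the prevention strategy can stop at the first state that is not excluded (satisficing), which is the difference in computational demands described above.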
Now that I have explained (hopefully in simple terms) the concepts of self-regulatory focus, let me quote from the article and show how Higgins arrives at them.
The theory of self-regulatory focus begins by assuming that the hedonic principle should operate differently when serving fundamentally different needs, such as the distinct survival needs of nurturance (e.g., nourishment) and security (e.g., protection). Human survival requires adaptation to the surrounding environment, especially the social environment (see Buss, 1996). To obtain the nurturance and security that children need to survive, children must establish and maintain relationships with caretakers who provide them with nurturance and security by supporting, encouraging, protecting, and defending them (see Bowlby, 1969, 1973). To make these relationships work, children must learn how their appearance and behaviors influence caretakers’ responses to them (see Bowlby, 1969; Cooley, 1902/1964; Mead, 1934; Sullivan, 1953). As the hedonic principle suggests, children must learn how to behave in order to approach pleasure and avoid pain. But what is learned about regulating pleasure and pain can be different for nurturance and security needs. Regulatory-focus theory proposes that nurturance-related regulation and security-related regulation differ in regulatory focus. Nurturance-related regulation involves a promotion focus, whereas security-related regulation involves a prevention focus.
People are motivated to approach desired end-states, which could be either promotion-focus aspirations and accomplishments or prevention-focus responsibilities and safety. But within this general approach toward desired end-states, regulatory focus can induce either approach or avoidance strategic inclinations. Because a promotion focus involves a sensitivity to positive outcomes (their presence and absence), an inclination to approach matches to desired end-states is the natural strategy for promotion self-regulation. In contrast, because a prevention focus involves a sensitivity to negative outcomes (their absence and presence), an inclination to avoid mismatches to desired end-states is the natural strategy for prevention self-regulation (see Higgins, Roney, Crowe, & Hymes, 1994).
Figure 1 (not shown here, go read the article for the figure) summarizes the different sets of psychological variables discussed thus far that have distinct relations to promotion focus and prevention focus (as well as some variables to be discussed later). On the input side (the left side of Figure 1), nurturance needs, strong ideals, and situations involving gain-nongain induce a promotion focus, whereas security needs, strong oughts, and situations involving nonloss-loss induce a prevention focus. On the output side (the right side of Figure 1), a promotion focus yields sensitivity to the presence or absence of positive outcomes and approach as strategic means, whereas a prevention focus yields sensitivity to the absence or presence of negative outcomes and avoidance as strategic means.
Higgins then goes on to describe many experiments that support this differential regulatory focus and how it differs from pleasure-pain, valence-based approaches. He also discusses regulatory focus in terms of signal detection theory, and here it is important to note that a promotion focus leads to leaning towards (being biased towards) increasing hits and reducing misses, while a prevention focus means leaning towards increasing correct rejections and minimizing false alarms. Thus, a promotion-focused individual is driven by finding correct answers and minimizing errors of omission, while a prevention-focused person is driven by avoiding incorrect answers and minimizing errors of commission. In Higgins's words:
Individuals in a promotion focus, who are strategically inclined to approach matches to desired end-states, should be eager to attain advancement and gains. In contrast, individuals in a prevention focus, who are strategically inclined to avoid mismatches to desired end-states, should be vigilant to insure safety and nonlosses. One would expect this difference in self-regulatory state to be related to differences in strategic tendencies. In signal detection terms (e.g., Tanner & Swets, 1954; see also Trope & Liberman, 1996), individuals in a state of eagerness from a promotion focus should want, especially, to accomplish hits and to avoid errors of omission or misses (i.e., a loss of accomplishment). In contrast, individuals in a state of vigilance from a prevention focus should want, especially, to attain correct rejections and to avoid errors of commission or false alarms (i.e., making a mistake). Therefore, the strategic tendencies in a promotion focus should be to insure hits and insure against errors of omission, whereas in a prevention focus, they should be to insure correct rejections and insure against errors of commission.
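The signal-detection framing can be sketched in a few lines of Python (a toy tally of my own, with made-up criterion values, not Higgins's formalism): regulatory focus shows up as a shift in the response criterion, which in turn determines which of the four outcomes each focus risks.

```python
# Toy tally (my own illustration; the 0.3/0.7 criteria are arbitrary) of the
# four signal-detection outcomes and how regulatory focus biases the criterion.

def classify(signal_present, said_yes):
    """Label a trial with its signal-detection outcome."""
    if signal_present and said_yes:
        return "hit"
    if signal_present and not said_yes:
        return "miss (error of omission)"
    if said_yes:
        return "false alarm (error of commission)"
    return "correct rejection"

def answer_yes(evidence, focus):
    """Promotion lowers the bar for saying 'yes' (insuring hits);
    prevention raises it (insuring correct rejections)."""
    criterion = 0.3 if focus == "promotion" else 0.7
    return evidence > criterion

# Same ambiguous evidence, opposite error risks under the two foci:
print(classify(True, answer_yes(0.5, "promotion")))    # hit
print(classify(True, answer_yes(0.5, "prevention")))   # miss (error of omission)
print(classify(False, answer_yes(0.5, "promotion")))   # false alarm (error of commission)
print(classify(False, answer_yes(0.5, "prevention")))  # correct rejection
```

The eager (promotion) responder banks hits at the cost of false alarms; the vigilant (prevention) responder banks correct rejections at the cost of misses.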
He next discusses Expectancy × Value effects in utility research. Basically, whenever one tries to decide between two or more alternative actions/outcomes, one tries to find the utility of a particular decision/behavioral act based on both the value and expectancy of the outcome. Value means how desirable or undesirable that outcome is to the person. Expectancy means how probable it is that the contemplated action (that one is deciding to do) would lead to the outcome. By way of an example: if I am hungry, I want to eat food. Let's say there are two actions or decisions, with different utilities, that can lead to my hunger reduction. The first involves begging for food from the shopkeeper; the second involves stealing the food from the shopkeeper. The first may have positive value (begging might not be that embarrassing) but low expectancy (the shopkeeper is miserly and unsympathetic), while the second act may have negative value (I believe that stealing is wrong and would like to avoid that act) but high expectancy (I am sure I'll be able to steal the food and fulfill my hunger). The utility I impart to the two acts may determine which act I eventually decide to indulge in.
Higgins touches on research showing that Expectancy × Value has a multiplicative effect, i.e. as expectancy and value increase, the motivation to take that decision/course of action increases non-linearly. He clarifies that this interaction effect is seen in promotion focus, but not in prevention focus:
Expectancy-value models of motivation assume not only that expectancy and value have an impact on goal commitment as independent variables but also that they combine multiplicatively (Lewin, Dembo, Festinger, & Sears, 1944; Tolman, 1955; Vroom, 1964; for a review, see Feather, 1982). The multiplicative assumption is that as either expectancy or value increases, the impact of the other variable on commitment increases. For example, it is assumed that the effect on goal commitment of higher likelihood of goal attainment is greater for goals of higher value. This assumption reflects the notion that the goal commitment involves a motivation to maximize the product of value and expectancy, as is evident in a positive interactive effect of value and expectancy. This maximization prediction is compatible with the hedonic or pleasure principle because it suggests that people are motivated to attain as much pleasure as possible.
Despite the almost universal belief in the positive interactive effect of value and expectancy, not all studies have found this effect empirically (see Shah & Higgins, 1997b). Shah and Higgins proposed that differences in the regulatory focus of decision makers might underlie the inconsistent findings in the literature. They suggested that making a decision with a promotion focus is more likely to involve the motivation to maximize the product of value and expectancy. A promotion focus on goals as accomplishments should induce an approach-matches strategic inclination to pursue highly valued goals with the highest expected utility, which maximizes Value × Expectancy. Thus, the positive interactive effect of value and expectancy assumed by classic expectancy-value models should increase as promotion focus increases.
But what about a prevention focus? A prevention focus on goals as security or safety should induce an avoid-mismatches strategic inclination to avoid all unnecessary risks by striving to meet only responsibilities that are clearly necessary. This strategic inclination creates a different interactive relation between value and expectancy. As the value of a prevention goal increases, the goal becomes a necessity, like the moral duties of the Ten Commandments or the safety of one’s child. When a goal becomes a necessity, one must do whatever one can to attain it, regardless of the ease or likelihood of goal attainment. That is, expectancy information becomes less relevant as a prevention goal becomes more like a necessity. With prevention goals, motivation would still generally increase when the likelihood of goal attainment is higher, but this increase would be smaller for high-value goals (i.e., necessities) than low-value goals. Thus, the second prediction was that the positive interactive effect of value and expectancy assumed by classic expectancy value models would not be found as prevention focus increased. Specifically, as prevention focus increases, the interactive effect of value and expectancy should be negative.
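Shah and Higgins's two predictions can be captured in a toy model (mine, not theirs; the 0.5 coefficients are arbitrary and only fix the sign of the interaction term): commitment is value plus expectancy plus an interaction term whose sign flips with regulatory focus.

```python
# Toy formalization (my own, with arbitrary coefficients) of the predicted
# Value x Expectancy interaction: positive under promotion focus,
# negative under prevention focus.

def commitment(value, expectancy, focus):
    k = 0.5 if focus == "promotion" else -0.5   # sign of the interaction term
    return value + expectancy + k * value * expectancy

def expectancy_gain(value, focus):
    """How much raising expectancy (0.1 -> 0.9) boosts commitment to a goal."""
    return commitment(value, 0.9, focus) - commitment(value, 0.1, focus)

# Promotion: raising expectancy helps high-value goals *more* (positive interaction).
print(expectancy_gain(1, "promotion"), expectancy_gain(10, "promotion"))
# Prevention: a high-value goal is a necessity, so expectancy matters *less*.
print(expectancy_gain(1, "prevention"), expectancy_gain(10, "prevention"))
```

Under promotion focus the expectancy gain grows with value (the classic multiplicative effect); under prevention focus it shrinks as value rises, which is the negative interaction predicted in the quoted passage.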
And that is exactly what they found! The paper touches on many other corroborating studies, and the interested reader can go to the source for more. Here I will now focus on his concepts of regulatory reference and regulatory anticipation.
Regulatory reference is the tendency to be driven either by positive and desired end-states as reference points and goals, or by negative and undesired end-states as the most prominent goals. For example, eating food is a desired end-state, while being eaten by others is an undesired end-state. Now, an organism may be driven by the end-state of 'getting food' and thus would be regulating approach behavior of how to go about getting food. It is important to contrast this with regulatory focus: while searching for food, it may have a promotion orientation, focusing on matching the end-state, or it may have a prevention focus, i.e. avoiding states that don't contain food; but it is still driven by a 'positive' or desired end-state. On the other hand, when the regulatory reference is a negative or undesired end-state like 'becoming food', then avoidance behavior is regulated, i.e. behavior is driven by avoiding the end-state. Thus, any state that keeps one away from 'being eaten' is the one that is desired; this may involve a promotion focus, as in approaching states that are opposite of the undesired state and provide safety from the predator, or it may involve a prevention focus, as in avoiding states that lead one closer to the undesired end-state. In the words of Higgins:
Inspired by these latter models in particular, Carver and Scheier (1981, 1990) drew an especially clear distinction between self-regulatory systems that have positive versus negative reference values. A self-regulatory system with a positive reference value has a desired end state as the reference point. The system is discrepancy reducing and involves attempts to move one’s (represented) current self-state as close as possible to the desired end-state. In contrast, a self-regulatory system with a negative reference value has an undesired end-state as the reference point. This system is discrepancy-amplifying and involves attempts to move the current self-state as far away as possible from the undesired end-state.
To me, regulatory reference is similar to the Value associated with a utility decision: it determines whether, when we are choosing between different actions/goals, the end-states or goals have a positive or a negative connotation.
That brings us to regulatory anticipation: the now well-known desire/dread functionality of dopamine-mediated brain regions that are involved in the anticipation of pleasure and pain and drive behavior. This anticipation of pleasure or pain is driven by our expectancies of how our actions will yield the desired/undesired outcomes, and can be treated as equivalent to Expectancy in utility decisions. The combination of the independent factors of regulatory reference and regulatory anticipation drives which end-state or goal is activated as the next target for the organism. Once a goal is activated, the organism's tendencies towards promotion or prevention focus determine how it strategically uses approach/avoidance mechanisms to achieve that goal or move towards the end-state. Let us also look at regulatory anticipation as described by Higgins:
Freud (1920/1950) described motivation as a “hedonism of the future.” In Beyond the Pleasure Principle (Freud, 1920/1950), he postulated that people go beyond total control of the “id” that wants to maximize pleasure with immediate gratification to regulating as well in terms of the “ego” or reality principle that avoids punishments from norm violations. For Freud, then, behavior and other psychical activities were driven by anticipations of pleasure to be approached (wishes) and anticipations of pain to be avoided (fears). Lewin (1935) described how the “prospect” of reward or punishment is involved in children learning to produce or suppress, respectively, certain specific behaviors (see also Rotter, 1954). In the area of animal learning, Mowrer (1960) proposed that the fundamental principle underlying motivated learning was regulatory anticipation, specifically, approaching hoped-for desired end-states and avoiding feared undesired endstates. Atkinson’s (1964) personality model of achievement motivation also proposed a basic distinction between self-regulation in relation to “hope of success” versus “fear of failure.” Wicker, Wiehe, Hagen, and Brown (1994) extended this notion by suggesting that approaching a goal because one anticipates positive affect from attaining it should be distinguished from approaching a goal because one anticipates negative affect from not attaining it. In cognitive psychology, Kahneman and Tversky’s (1979) “prospect theory” distinguishes between mentally considering the possibility of experiencing pleasure (gains) versus the possibility of experiencing pain (losses).
Why have I been dwelling on this, and how does it fit into the larger framework? Wait for the next post, but the hint is that I believe both bipolar mania and depression are driven by too much goal-oriented activity: in mania the focus being promotion, while in depression the focus being prevention. Higgins does discuss mania and depression in his article, but my views differ and would require a new and separate blog post. Stay tuned!
A lot has already been written in the blogosphere regarding this study, which found the brain regions involved in first impression formation. I view the study from a slightly different angle, but first let me introduce the study and its main findings.
The study was focused on finding the brain regions involved in the impression formation of a new social entity. We all know that we form automatic and consistent first impressions of strangers we meet, based on everything from their face to the social information available about them. The authors theorized that to know which regions of the brain are involved in evaluating a person for the first time, it would be sufficient to know which regions were engaged more while evaluation-consistent information was being processed. To understand this logic, consider the brain regions involved in memory and how they are discovered. Typically, a series of words/images to be remembered is presented to subjects while their brains are simultaneously imaged. Later, a memory recall/recognition test is administered. It is found that some brain regions are consistently more active during the encoding of those stimuli that are later recalled/recognized correctly. This is known as the difference in memory (DM) effect. The fact that these areas are differentially engaged during encoding of remembered as opposed to forgotten stimuli is taken as evidence that these brain regions are involved in memory encoding. Similarly, it is found that evaluations consistent with the later overall evaluation of the person engage some brain regions more than evaluations that are inconsistent with it. This difference in evaluation (DE) effect can be used to locate the regions involved in social evaluation, or the formation of first impressions.
Previous studies had indicated that the dmPFC was engaged in social evaluation; however, many cognitive factors other than purely evaluative ones might be at work here.
It has also been indicated that the amygdala is involved in both social and valence-based evaluations and might be involved in first impression formation. So the authors hypothesized that they would find differential activity in the amygdala for consistent as opposed to inconsistent evaluations, and this is what they actually observed. They also found that the PCC was differentially engaged while forming first impressions and was thus another brain region involved in evaluating others.
Here is the study design:
To test these hypotheses, we developed the difference in evaluation procedure (see Figure), allowing us to sort social information encoding trials by subsequent evaluations. More specifically, we measured blood oxygenation level–dependent (BOLD) signals using whole brain fMRI during exposure to different person profiles. Each profile consisted of 6 person-descriptive sentences implying different personality traits. The sentences varied gradually in their positive to negative valence (or vice versa) but evoked equivalent levels of arousal. A 12-s interval with the face alone separated the positive and the negative segments. Subsequently, an evaluation slide instructed subjects to form their impression on an 8-point scale. On the basis of these evaluations, we determined which of the presented descriptive sentences guided evaluations (evaluation relevant) and which did not (evaluation irrelevant). For example, if a subject’s evaluation was positive, we assigned the positive segment of the profile to the evaluation-relevant category and the negative segment to the evaluation-irrelevant category. We then identified the brain regions dissociating items from each category (that is, difference in evaluation effect). Notably, we correlated subjects’ BOLD signal with their own individual evaluations. This allowed us to identify brain regions that were consistent across subjects in processing evaluation-relevant information regardless of the particular stimuli that they considered. Immediately after the scanning session, subjects underwent a memory-recognition task.
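The sorting logic of the procedure can be sketched roughly as follows (the variable names and midpoint are my assumptions, not the authors' code): each profile's sentence segments are binned as evaluation-relevant vs. evaluation-irrelevant according to the subject's own final rating, and the encoding-phase signal is then contrasted between the two bins.

```python
# Rough sketch (my own, illustrative only) of the "difference in evaluation"
# sorting: bin segments by the subject's final rating, then contrast the
# mean encoding-phase signal between relevant and irrelevant bins.

def sort_segments(final_rating, positive_segment, negative_segment, midpoint=4.5):
    """On an 8-point scale, a rating above the midpoint means the positive
    sentences drove the impression; at or below, the negative ones did."""
    if final_rating > midpoint:
        return {"relevant": positive_segment, "irrelevant": negative_segment}
    return {"relevant": negative_segment, "irrelevant": positive_segment}

def de_effect(signal_by_segment, bins):
    """Difference in evaluation: mean signal for relevant minus irrelevant items."""
    mean = lambda keys: sum(signal_by_segment[k] for k in keys) / len(keys)
    return mean(bins["relevant"]) - mean(bins["irrelevant"])

signal = {"pos1": 1.2, "pos2": 1.0, "neg1": 0.4, "neg2": 0.6}
bins = sort_segments(7, ["pos1", "pos2"], ["neg1", "neg2"])
print(de_effect(signal, bins))  # a positive difference: region tracks relevant info
```

A region showing a reliably positive difference across subjects is a candidate for forming the evaluation, which is how the amygdala and PCC emerged from the analysis.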
The results were clear: while the dmPFC was involved in social evaluations, it was not differentially engaged, and thus plays a general role, perhaps holding the representation of the evaluation after it has already been formed. In contrast, both the amygdala and the PCC were differentially recruited and thus underlie first-time evaluations. In the words of the authors:
Understanding the neural substrates of social cognition has been one of the core motivations driving the burgeoning field of social neuroscience. A number of studies have highlighted the dmPFC in the processing of social information. Our results provide further evidence that the dmPFC is recruited to process person-descriptive information during impression formation. However, BOLD responses in this region do not dissociate evaluation-relevant from evaluation-irrelevant information, suggesting that the dmPFC is not essential for the evaluative component of impression formation. In fact, social evaluation recruits brain regions that are not socially specialized but are more generally involved in valuation and emotional processes.
Valuation and emotional processes, as a substantial amount of research has shown, are characteristic of the amygdala. In particular, the amygdala is considered to be a crucial region in learning about motivationally important stimuli. It is also implicated in social inferences that are based on facial and bodily expressions, in inferences of trustworthiness and in the capacity to infer social attributes. Moreover, the involvement of amygdala in social inferences might be independent of awareness or explicit memory. For example, increased amygdala responses were correlated with implicit, but not explicit, measures of the race bias, as well as with presentation of faces previously presented in an emotional, but not neutral, context, regardless of whether subjects could explicitly retrieve this information. Here we provide evidence linking the two domains of affective learning and social processing by showing that the amygdala is engaged in the formation of subjective value assigned to another person in a social encounter.
Although the amygdala is typically implicated in the processing of negative affect and negative stimuli have been shown to modulate it more than positive stimuli, we found that the amygdala processed both positive and negative evaluation-relevant information, suggesting that amygdala activity is driven by factors other than mere valence, such as the motivational importance or salience of the stimuli. This result is consistent with recent findings showing enhanced amygdala responses for both positive and negative stimuli as a function of motivational importance.
Evidence related to the PCC has been more diverse. There have been reports in the social domain, such as involvement in theory of mind and self-referential outward-focused thought, in memory related processes such as autobiographical memory of family and friends, and in emotional modulation of memory and attention. More recently, the PCC has been linked with economic decision making, the assignment of subjective value to rewards under risk and uncertainty, and credit assignment in a social exchange. A common denominator of these studies might be that all involved either a social or an outward-directed valuation component. Our task also encompasses these features, extending the role of the PCC to value assignment to social information guiding our first impressions of others.
The amygdala and the PCC are both interconnected with the thalamus as part of a larger circuitry that is implicated in emotion, arousal and learning. Beyond the known role of the amygdala and the PCC in social-information processing and value representation, our results suggest a neural mechanism underlying the online formation of first impressions. When encoding everyday social information during a social encounter, these regions sort information on the basis of its personal and subjective importance and summarize it into an ultimate score, a first impression. Other regions, such as the ventromedial PFC, the striatum and the insula, have also been implicated in valuation processes. However, these regions did not emerge in our difference in evaluation effect analysis. This might suggest a possible dissociation in the valuation network between regions engaged in the formation of value and its subsequent representation and updating. The latter regions would not be engaged during encoding and therefore would not show a difference in evaluation effect but would instead have an effect once the evaluation is formed. The amygdala and the PCC probably participate in both value formation and its representation. The difference in evaluation procedure may provide a useful tool for disentangling the different components of the valuation system and their specific contributions to social versus nonsocial evaluations.
Now I would like to link all this new research with earlier research on face attributes, which found two orthogonal factors that characterize a face: trustworthiness (valence) and dominance. It is important to note that faces are an important means by which we make snap judgments, and if there are two orthogonal dimensions (found using factor analysis) on which we judge faces and form first impressions, there is no reason to suppose that those same two orthogonal factors would not come into play when we form first impressions based on social information rather than the face. What I am trying to say is that social evaluation driven by non-face social information would still be structured around whether the information pointed to the person as Trustworthy or as Dominant. I would expect different brain regions to be specialized for these two functions: we know that the amygdala is specialized for trustworthiness judgments, and that fits with one of the areas that has been identified for snap judgments. That leaves us with the PCC, which has normally been implicated in self-referential thinking with an outward and evaluative (as opposed to inward and executive) focus, and also a preventive focus. It seems likely that this region would be used to evaluate a social other and judge whether he has the ability to execute, harm or dominate oneself. So, what I would like to see is a study that dissociates the social information provided to subjects in terms of trustworthiness and dominance factors and sees whether there is a dissociation in the evaluative regions of the amygdala and PCC; or maybe one can just factor-analyze the results of the original study and see if the same two factors emerge! I am excited, and would love to see these studies being performed!
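To give a feel for what such a factor analysis would look like, here is a minimal sketch on entirely synthetic trait ratings (not the data from either study): six hypothetical trait ratings are generated from two independent latent dimensions, and a two-factor solution with varimax rotation should recover the trustworthiness-like and dominance-like triads.

```python
# Illustrative sketch with made-up data: does a two-factor analysis of
# trait ratings recover two orthogonal dimensions (trustworthiness and
# dominance), as Oosterhof & Todorov reported for face judgments?
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_people = 200

# Two hypothetical latent dimensions per rated person
trust = rng.normal(size=n_people)
dominance = rng.normal(size=n_people)

# Six observed trait ratings, each loading mainly on one latent dimension
ratings = np.column_stack([
    trust + 0.3 * rng.normal(size=n_people),      # "honest"
    trust + 0.3 * rng.normal(size=n_people),      # "sincere"
    trust + 0.3 * rng.normal(size=n_people),      # "trustworthy"
    dominance + 0.3 * rng.normal(size=n_people),  # "aggressive"
    dominance + 0.3 * rng.normal(size=n_people),  # "confident"
    dominance + 0.3 * rng.normal(size=n_people),  # "dominant"
])

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(ratings)
loadings = fa.components_  # shape (2, 6): factors x traits
# After varimax rotation, each factor should load strongly on one triad
# of traits and weakly on the other.
print(np.round(loadings, 2))
```

If real social-information ratings were fed in instead of these synthetic columns, a clean two-factor structure would support the idea that non-face first impressions are organized along the same two dimensions.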
Schiller, D., Freeman, J., Mitchell, J., Uleman, J., & Phelps, E. (2009). A neural mechanism of first impressions Nature Neuroscience DOI: 10.1038/nn.2278
Oosterhof, N., & Todorov, A. (2008). The functional basis of face evaluation Proceedings of the National Academy of Sciences, 105 (32), 11087-11092 DOI: 10.1073/pnas.0805664105
So I thought I would link these for the benefit of my readers. While it may sound oxymoronic to review a review, let me briefly summarize the review article.
The article lists three important contributing factors for depression: the first is genetics, the second childhood stress, and the third ongoing or recent psychosocial stress. Different neurobiological mechanisms underlie each of these factors.
As a first example, consider the famed monoamine theory of depression, whereby low baseline serotonin (and norepinephrine) levels in the brain are held responsible for depressive symptoms. This hypothesis derives most of its evidence from the effects of antidepressants on the brain. Depression also has a heritable genetic component (apparent from twin studies); some of this heritability can be explained by polymorphisms of various genes affecting the serotonin system, primary among them the gene coding for the serotonin transporter, or SERT. Thus, the underlying serotonin system can be treated as one biological system that has a strong genetic component.
As a second example, consider the hypothalamic-pituitary-adrenal (HPA) axis, which is involved in the response to stress. This system develops abnormally if the child is exposed to stress during a critical developmental window. Experiments with rats and monkeys confirm that an abnormal and stressful environment during early childhood leads to abnormal functioning of this axis, which later predisposes to depression. Thus, the HPA axis may be taken as a proxy for the component that is due to development and epigenetics.
As a third example, consider Brain-Derived Neurotrophic Factor (BDNF). BDNF is responsible for the survival of new neurons and for new synapse formation (synaptic plasticity) during adulthood; new neurons and new synapses help us learn (via neurogenesis in the hippocampus), especially when the environment is stressful. There are two polymorphisms of the gene coding for BDNF; the ‘MET’ allele causes reduced hippocampal volume at birth, hypoactivity of the hippocampus in the resting state, increased hippocampal metabolism during learning, and relatively poor hippocampus-dependent memory function. From all this it is apparent that the MET allele somehow leads to less synthesis of BDNF and thus to poorer hippocampal learning as a result of reduced neurogenesis/synaptogenesis. Now, the same MET allele also raises the risk of depression, and the mediating factor is the stress responsivity of the individual. Thus, BDNF may mediate a person's sensitivity to the same external psychosocial stress and might be very crucial via gene-environment interaction effects. Prolonged stress, which may result in prolonged BDNF secretion and thus lead to toxicity and paradoxical opposite effects, may be another putative mechanism linking stress exposure in adulthood to the underlying pathophysiology of reduced neurogenesis.
The above may seem too simplistic, but it points us in the right direction: some neurobiological systems, like the serotonin system, may be largely genetic in nature, and our treatment approaches can be based around this fact. Others, like HPA-axis malfunctioning, may be entirely environmental in origin, and preventive interventions, like ensuring a stress-free childhood for all, should perhaps be the policy focus here. Depending on the plasticity of the HPA axis later in life, therapy or medication may be the treatment options. Finally, other neurobiological systems involved, like BDNF and stress sensitivity/over-exposure, may display complex gene-environment interactions, and again, knowing the nature of these systems will help us counter the symptoms using a combination of CBT and medication.
Depression is definitely far too complex a disorder to be completely understood on the basis of a single review article, or even a series of blog posts, but the underlying neurobiological mechanisms and systems clearly indicate how genetics, environment (especially during critical developmental windows) and epigenetics (gene-environment interactions) are involved in its etiology, and how different interventions and treatments will have to be developed taking these into account.
aan het Rot, M., Mathew, S., & Charney, D. (2009). Neurobiological mechanisms in major depressive disorder Canadian Medical Association Journal, 180 (3), 305-313 DOI: 10.1503/cmaj.080697
There is a new study in PLoS ONE that argues that we make the reality-fiction distinction on the basis of how personally relevant the entity in question is. To be fair, the study focuses on fictional, famous, or familiar (friends and family) entities, like Cinderella, Obama, or one's mother. Based on the fact that these are arranged in increasing order of personal relevance, and that they represent both fictional and real characters, it tries to show that one of the means by which we distinguish fictional from real characters is the degree of personal relevance these characters are able to invoke in us.
The authors build upon their previous work, which showed that the amPFC (anterior medial prefrontal cortex) and PCC (posterior cingulate cortex), which are part of the brain's default network, are differentially recruited when people are exposed to contexts involving real as opposed to fictional entities. From this neural correlate of the regions involved in distinguishing fiction from reality, and from the known functions of these brain regions in self-referential thinking and autobiographical memory retrieval, the authors hypothesized that the reality-fiction distinction may be mediated by relevance to the self, and that this difference in self-relevance leads to differential engagement of these brain areas. I quote from the paper:
In the first attempt to tackle this issue using functional magnetic resonance imaging (fMRI), we aimed to uncover which brain regions were preferentially engaged when processing either real or fictional scenarios. The findings demonstrated that processing contexts containing real people (e.g., George Bush) compared to contexts containing fictional characters (e.g., Cinderella) led to activations in the anterior medial prefrontal cortex (amPFC) and the posterior cingulate cortex (PCC).
These findings were intriguing for two reasons. First, the identified brain areas have been previously implicated in self-referential thinking and autobiographical memory retrieval. This suggested that information about real people, in contrast to fictional characters, may be coded in a manner that leads to the triggering of automatic self-referential and autobiographical processing. This led to the hypothesis that information about real people may be coded in more personally relevant terms than that of fictional characters. We do, after all, occupy a common social world and have a wider range of associations in relation to famous people. These may be spontaneously triggered and processed further when reading about them. A logical extension of this premise would be that explicitly self-relevant information should therefore elicit such processing to an even greater extent.
To test the above hypothesis they ran an experiment using behavioral measures like reaction time, accuracy, and perceived difficulty of judging propositions involving fictional, famous, and close entities. Meanwhile, they also measured, using fMRI, the differential recruitment of brain areas as the subjects performed under the different entity conditions. The experimental design is best summarized by the figure below.
What they found was that in the control and fictional conditions, the reaction times, accuracy, and perceived difficulty associated with the propositions differed significantly (higher reaction times, lower accuracy, and greater perceived difficulty) from the famous and friend conditions. Thus, from the behavioral data it was apparent that real characters were judged faster, more accurately, and more easily than fictional characters. The fMRI data showed that, as hypothesized, the amPFC and PCC were recruited significantly more in personally relevant contexts and showed a gradient in the expected direction. The figure below summarizes the findings:
In particular, in line with our predictions, regions in and near the amPFC (including the ventral mPFC) and PCC (including the retrosplenial cortex) were modulated by the degree of personal relevance associated with the presented entities. These regions were most strongly engaged when processing high personal relevance contexts (friend-real), secondarily for medium relevance contexts (famous-real) and least of all in the low personal relevance contexts (fiction) (high relevance>medium relevance>low relevance).
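As an aside, the kind of behavioral condition comparison described above can be sketched in a few lines. This is a minimal illustration on synthetic numbers (not the study's data), assuming hypothetical mean reaction times that decrease with personal relevance, tested with a one-way ANOVA across the three conditions.

```python
# Sketch of a condition-wise reaction-time comparison with made-up data:
# fictional entities assumed slowest, personally known entities fastest.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# Hypothetical per-trial reaction times in milliseconds, 30 trials each
rt_fiction = rng.normal(950, 80, size=30)  # low personal relevance
rt_famous = rng.normal(870, 80, size=30)   # medium personal relevance
rt_friend = rng.normal(820, 80, size=30)   # high personal relevance

# One-way ANOVA: is there a reliable difference across conditions?
F, p = f_oneway(rt_fiction, rt_famous, rt_friend)
print(f"F = {F:.2f}, p = {p:.4f}")
```

With a graded effect like the one built into these synthetic means, the ANOVA comes out significant; in a real replication one would of course also test the ordering (fiction &gt; famous &gt; friend) directly.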
The amPFC and PCC regions are known to be commonly engaged during autobiographical and episodic memory retrieval as well as during self-referential processing. Regarding their specific roles, there is evidence indicating that the amPFC is comparatively more selective for self-referential processing whereas the PCC/RSC is more selective for episodic memory retrieval. The results of the present study contribute to the understanding of processes implemented in these regions by showing that the demands on autobiographical retrieval processes and self-referential mentation are affected by the degree of personal relevance associated with a processed scenario. It should additionally be noted that the extension of the activations in anterior and ventral PFC regions into subgenual cingulate areas indicates that the degree of personal relevance also modulated responsiveness in affective or emotional regions of the brain.
Here is what the authors have to say about the wider ramifications:
That core regions of the brain’s default network are spontaneously modulated by the degree of stimulus-associated personal relevance is a consequential finding for two reasons. Firstly, the findings suggest that one of the factors that guide our implicit knowledge of what is real and unreal is the degree of coded personal relevance associated with a particular entity/character representation.
What this might translate to at a phenomenological level is that a real person feels more “real” to us than a fictional character because we automatically have access to far more comprehensive and multi-flavored conceptual knowledge in relation to the real people than fictional characters. This would also explain why a real person we know personally (a friend) feels more real to us than a real person who we do not know personally (George Bush).
I would say that there are other, broader implications. First, it is important to note that, phenomenologically, schizophrenia/psychosis is characterized by an inability to distinguish reality from fiction: what is fictitious also starts seeming real. A putative mechanism whereby even fictional things start assuming ‘real’ dimensions may be the attribution of personal relevance or significance to those fictional entities. If something, even though fictional in nature, becomes highly personally relevant, then it would be easier to treat it as real. What ties things together is the fact that the default brain network is indeed overactive in schizophrenics. If the PCC and amPFC are hyperactive, no wonder even fictional entities would be attributed personal relevance and incorporated into reality. I had earlier discussed delusions of reference with respect to default-network hyperactivity in schizophrenics, and this can now easily be extended to account for the loss of contact with reality, with the relevance-reality linkage in place. When everything is self-relevant, everything is real.
As always, I am excited and would like to see experiments done with schizophrenics/schizotypals using the same experimental paradigm, to find out whether there are significant differences in the behavioral measures between controls and subjects, and whether these are mediated by differential engagement of the default brain network. In autistics, of course, I hypothesize the opposite effects.
Needless to say I am grateful to Neuronarrative for reporting on this and helping me make one more puzzle piece fit in place.
Abraham, A., & von Cramon, D. (2009). Reality = Relevance? Insights from Spontaneous Modulations of the Brain’s Default Network when Telling Apart Reality from Fiction PLoS ONE, 4 (3) DOI: 10.1371/journal.pone.0004741