Sandeep Gautam is a psychology and cognitive neuroscience enthusiast, whose basic grounding is in computer science.
I have installed a widget in my sidebar from skribit.com that lets you have a say in what article reviews and topics you would like to see me blog about. I am sure I have a mature audience and that the topics/article reviews requested will relate to my field of expertise, so do go ahead and have your say. I don’t promise that I will be able to write on every suggested topic, but as an experiment I will try my best. You can suggest a topic like “OCEAN theory of personality” or a peer-reviewed/pre-print article (an open-access article, or be prepared to send the article to me if it is behind a subscription firewall). I hope this takes off and we have true two-way communication. By the way, for this you would have to visit my web page and look at the sidebar, or you may try this link if it works.
Well, the cluster hangs together. Previous research has found that low LI, psychosis (schizophrenia), and creativity are related; previous research has also found that psychotic people, and some types of creative people, place more faith in intuition; and this research ties things together by showing that low LI and high faith in intuition are correlated.
The research in question is by Kaufman, and in it he explores the dual-process theories of cognition: the popular slow high road of deliberate conscious reasoning and the fast low road of unconscious processing. I would rather have the high road consist of both cognitive and affective factors, and similarly the unconscious low road consist of both cognitive and affective factors. Kaufman focuses on the unconscious low road, and his factor analysis reveals three factors: Faith in Intuition, a metacognition about one's tendency to use intuition; Holistic Intuition, the cognitive factor; and Affective Intuition, the affective factor. With this in mind, let us see what Kaufman's thesis is:
He first introduces the low road and the high road:
In recent years, dual-process theories of cognition have become increasingly popular in explaining cognitive, personality, and social processes (Evans & Frankish, 2009). Although individual differences in the controlled, deliberate, reflective processes that underlay System 2 are strongly related to psychometric intelligence (Spearman, 1904) and working memory (Conway, Jarrold, Kane, Miyake, & Towse, 2007), few research studies have investigated individual differences in the automatic, associative, nonconscious processes that underlay System 1. Creativity and intelligence researchers might benefit from taking into account dual-process theories of cognition in their models and research, especially when exploring individual differences in nonconscious cognitive processes.
Then he explains LI:
Here I present new data, using a measure of implicit processing called latent inhibition (LI; Lubow, Ingberg-Sachs, Zalstein-Orda, & Gewirtz, 1992). LI reflects the brain’s capacity to screen from current attentional focus stimuli previously tagged as irrelevant (Lubow, 1989). LI is often characterized as a preconscious gating mechanism that automatically inhibits stimuli that have been previously experienced as irrelevant from entering awareness, and those with increased LI show higher levels of this form of inhibition (Peterson, Smith, & Carson, 2002). Variation in LI has been documented across a variety of mammalian species and, at least in other animals, has a known biological basis (Lubow & Gerwirtz, 1995). LI is surely important in people’s everyday lives—if people had to consciously decide at all times what stimuli to ignore, they would quickly become overstimulated.
Indeed, prior research has documented an association between decreased LI and acute-phase schizophrenia (Baruch, Hemsley, & Gray, 1988a, 1988b; Lubow et al., 1992). It is known, however, that schizophrenia is also associated with low executive functioning (Barch, 2005). Recent research has suggested that in high-functioning individuals (in this case, Harvard students) with high IQs, decreased LI is associated with increased creative achievement (Carson et al., 2003). Therefore, decreased LI may make an individual more likely to perceive and make connections that others do not see and, in combination with high executive functioning, may lead to the highest levels of creative achievement. Indeed, the link between low LI and creativity is part of Eysenck’s (1995) model of creative potential, and Martindale (1999) has argued that a major contributor to creative thought is cognitive disinhibition.
He then relates this to intuition and presents his thesis:
A concept related to LI is intuition. Jung’s (1923/1971, p. 538) original conception of intuition is “perception via the unconscious.” Two of the most widely used measures of individual differences in the tendency to rely on an intuitive information-processing style are Epstein’s Rational- Experiential Inventory (REI; Pacini & Epstein, 1999) and the Myers-Briggs Type Indicator (MBTI) Intuition/Sensation subscale (Myers, McCaulley, Quenk, & Hammer, 1998). Both of these measures have demonstrated correlations with openness to experience (Keller, Bohner, & Erb, 2000; McCrae, 1994; Pacini & Epstein, 1999), a construct that has in turn shown associations with a reduced LI (Peterson & Carson, 2000; Peterson et al., 2002), as well as with divergent thinking (McCrae, 1987) and creative achievement.
The main hypothesis was that intuitive cognitive style is associated with decreased latent inhibition.
He found support for the hypothesis in his data: people with low LI scored high on the Faith in Intuition factor. Here is what he discusses:
The results of the current study suggest that faith in intuition, as assessed by the REI and the MBTI Thinking/Feeling subscale, is associated with decreased LI. Furthermore, a factor consisting of abstract, conceptual, holistic thought is not related to LI. Consistent with Pretz and Totz (2007), exploratory factor analysis revealed a distinction between a factor consisting of REI Experiential and MBTI Thinking/Feeling and a factor consisting of MBTI Intuition/Sensation and REI Rational Favorability. This further supports Epstein’s (1994) theory that the experiential system is directly tied to affect. The finding that MBTI Intuition/Sensation and REI Rational Favorability loaded on the same factor supports the idea that the type of intuition that is being measured by these tasks is affect neutral and more related to abstract, conceptual, holistic thought than to the gut feelings that are part of the Faith in Intuition factor.
Here are the broader implications:
The current study adds to a growing literature on the potential benefits of a decreased LI for creative cognition. Hopefully, with further research on the biological basis of LI, as well as its associated behaviors, including interactions with IQ and working memory, we can develop a more nuanced understanding of creative cognition. There is already promising theoretical progress in this direction.
Peterson et al. (2002) and Peterson and Carson (2000) found a significant relationship between low LI and three personality measures relating to an approach-oriented response and sensation-seeking behavior: openness to experience, psychoticism, and extraversion. Peterson et al. found that a combined measure of openness and extraversion (which was referred to as plasticity) provided a more differentiated prediction of decreased LI.
Peterson et al. (2002) argued that individual differences in a tendency toward exploratory behavior and cognition may be related to the activity of the mesolimbic dopamine system and predispose an individual to perceive even preexposed stimuli as interesting and novel, resulting in low LI. Moreover, under stressful or novel conditions, the dopamine system in these individuals will become more activated and the individual will instigate exploratory behavior. Under such conditions, decreased LI could help the individual by allowing him or her more options for reconsideration and thereby more ways to resolve the incongruity. It could also be disadvantageous in that the stressed individual risks becoming overwhelmed with possibilities. Research has shown that the combination of high IQ and reduced LI predicts creative achievement (Carson et al., 2003). Therefore, the individual predisposed to schizophrenia may suffer from an influx of experiential sensations and possess insufficient executive functioning to cope with the influx, whereas the healthy individual low in LI and open to experience (particularly an openness and faith in his or her gut feelings) may be better able to use the information effectively while not becoming overwhelmed or stressed out by the incongruity of the situation. Clearly, further research will need to investigate these ideas, but an understanding of the biological basis of individual differences in different forms of implicit processing and their relationship to openness to experience and intuition will surely increase our understanding of how certain individuals attain the highest levels of creative accomplishment.
To me this is exciting; the triad of creative/psychotic cognitive style, intuition, and latent inhibition seems to gel together. The only gripe I have is that the author could also have measured intuition directly by using some insight problems requiring ‘aha’ solutions; maybe that is a project for the future!
Kaufman, S. (2009). Faith in intuition is associated with decreased latent inhibition in a sample of high-achieving adolescents. Psychology of Aesthetics, Creativity, and the Arts, 3 (1), 28-34 DOI: 10.1037/a0014822
Chris Patil, of Ouroboros, and Vivian Siegel have an interesting and thought-provoking op-ed in DMM on the promise, and the not-so-promising actuality, of science 2.0.
They are right to doubt whether science 2.0 would attract many more scientists than the currently active science bloggers and the like, and I share their skepticism. However, while they believe that all the tools for online collaboration are already in place, I think we need a more formalized one-stop system for scientists, where all their sharing, networking, and collaborating needs are met. It doesn’t really attract me that much if I have to collaborate using FriendFeed, share using Twitter, learn using Google Reader, disseminate using Blogger, or network using academia.edu, etc. I am sure a scientific virtual water-cooler will soon emerge, but until that time I am skeptical of actual practicing scientists using science 2.0 in their day-to-day life; of course, how the current breed of science bloggers use these tools, and the kind of successful collaborations they can demonstrate, will likely define the way science 2.0 shapes up. Needless to say, I am excited to be among the early adopters, and while Twitter/FriendFeed have not lived up to their promise, their relatively older sibling, blogging, has managed to land me virtual collaborations in which I am discussing research ideas with people who actually perform experiments (I am, by circumstance, an armchair scientist). For an example, see the comments by Kim on my last post on action selection, which have also led to some offline discussion and a possible future collaboration. For me science 2.0 works perfectly because I am not in the competitive business of being the first to publish a paper or of securing tenure, and thus can put my ideas out to the world as freely as they come. At the same time, I am more than aware that the apprehensions scientists have about being stolen from are genuine and need more thought and care in the design of science 2.0 tools.
I would now like to quote some passages from the op-ed that I liked the most.
Suppose that your unique combination of training and expertise leads you to ask a novel question that you are not currently able to address. You advertise your idea to the world, seeking others who might be able to help. You find that Miranda has an idle machine, built for another purpose, that could be modified just so to help answer your question, if only she had a few samples from an appropriate patient. Hugo, busy with clinical responsibilities, has no time, but has a freezer full of biopsy tissues from such patients. Steve has the time and inclination to modify Miranda’s machine and to write the scripts to drive the analysis. Polly watches the whole process to make sure that the study has sufficient statistical power. Correspondence among the interested parties could be recorded in a publicly available forum, along with data and analysis as they emerge – allowing the entire scientific world to look on and to offer advice on the framing of the question, the design of the machine, the processing of the samples and the interpretation of the results.
In other words, what if you could think a thought at the world and have the world think back? What if everyone in the world were in your lab – a ‘hive mind’ of sorts, but composed of countless creative intellects rather than mindless worker ants, and one in which resources, reagents and effort could be shared, along with ideas, in a manner not dictated by institutional and geographical constraints?
What if, in the process, you could do actual scientific research? Granted, it would be research for which no one person (or group) could take credit, but research all the same. Progress might even occur more rapidly than it does in our world, where new knowledge is shared in the form of highly refined distillates of years of work.
I fit perfectly the profile of the person who can ask novel questions and suggest experiments, but lacks the expertise/time/resources/standing to run them. To me this hive mind would be a godsend. If only it could take off! But then they provide a reality check:
Beyond raising concerns about the philosophy of communication, our utopian fantasy ignores important aspects of human nature. In any real world, finding collaborators would require a great deal more than shooting questions into the void and cocking an ear for the echo. In particular, in order to find a colleague with exactly the right complement of skills, interest and dependability, we need not only openness but trust. Within a laboratory group (at least, in a functional one), trust is part and parcel of lab citizenship; we and our colleagues voluntarily suspend our competitive urges in order to create a cooperative (and mutually beneficial) environment. In the wider world, however, the presumption is reversed: we tend to be cagey and suspicious in our interactions with other scientists. When we step outside the laboratory door, we transform from Musketeers (‘All for one…!’) to Mulder and Scully (‘Trust no one.’).
Oh, how I hate that they burst my fantasy bubble by providing this reality check! But thankfully, not being bound to any laboratory, I am at least immune from this cooperate-or-compete dilemma. I just hope there are more people like me (or enough foolish scientists not really bothered about plagiarism) to reach a critical mass and snowball science 2.0. They then touch on some subtler aspects of the above:
Another clash between utopia and human nature occurs at the level of publicly sharing preliminary data. In particular, during the period of transition between the status quo and the glorious future, openness may be provably irrational from a game-theoretical standpoint. If I share my data but my competitors do not, I’ve laid all of my cards out on the table, whereas others play theirs close to the vest – a bad bet under any circumstances. At best, my openness allows my adversaries to strategize; at worst, it allows them to steal my ideas. Perhaps the term ‘stealing’ is too harsh: in the words of our estimable thesis advisor, Peter Walter, ‘you can’t unthink a thought.’ Once an idea is in the field, can anyone be blamed for reacting to it in a way that is personally optimal? We already live with this moral conundrum every time we agree to review papers and need to balance the expectation of confidentiality with our own desire to shape our own future plans on the basis of the best and most current information. Radical sharing will require ways for individuals to protect themselves from the occasionally deleterious consequences of rational self-interest.
Perhaps most importantly from a practical perspective: information doesn’t share itself. From establishing an open record of preliminary discussions to freely disseminating experimental results, each step in the process requires an infrastructure. A framework, composed of software and web tools, is necessary in order to empower individual scientists to share information without each of them having to write the enabling code from scratch.
The weakest part of the article, in my opinion, is when they argue that the tools are already available. I believe we are still in the early stages of experimenting; new concepts and sites like BiomedExperts need to be tried out, and I am sure we will soon get there. The authors suggest several sites where scientists doing science 2.0 purportedly hang out, and then point to reasons why that model has not succeeded yet:
Social networking tools also suffer from a variant of the ‘no one will go there until everyone goes there’ problem – the ‘me too’ dilution factor. Just as in the social/job space (Facebook, LinkedIn, MySpace, Bebo), there are myriad networks to choose from and many are too similar to distinguish. To a new user with limited time, it’s not obvious whether to try and join multiple networks, arbitrarily choose one, or wait for a clear winner to emerge.
Here’s praying that a clear victor emerges soon!
Patil, C., & Siegel, V. (2009). This revolution will be digitized: online tools for radical collaboration Disease Models and Mechanisms, 2 (5-6), 201-205 DOI: 10.1242/dmm.003285
I have recently blogged a bit about action selection and operant learning, emphasizing that the action one chooses, out of many possible, is driven by maximizing the utility function associated with the set of possible actions, so perhaps a quick read of the last few posts would help in appreciating where I am coming from.
To recap, whenever an organism decides to indulge in an act (an operant behavior), there are many possible actions from which it has to choose the most appropriate one. Each action leads to a possibly different outcome, and the organism may value the outcomes differentially. This valuation may be objective (how the organism actually ‘likes’ the outcome once it happens) or subjective (how keenly the organism ‘wants’ the outcome to happen, independent of whether the outcome is pleasurable or not). Also, it is never guaranteed that the action will produce the desired/expected outcome: there is always some probability that the act may or may not result in the expected outcome. And at a macro level, the organism may lack the energy required to indulge in the act or to carry it out successfully to completion. Mathematically, with each action one can associate a utility U = E x V, where U is the utility of the act; E is the expectancy as to whether one would be able to carry out the act and, if so, whether the act would result in the desired outcome; and V is the value (both subjective and objective) that one has assigned to the outcome. The problem of action selection is then simply to compute the utility of each of the n possible acts and to choose the action with maximum utility.
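This U = E x V selection rule can be sketched in a few lines of Python; the actions, expectancies, and values below are illustrative inventions, not data from any study:

```python
# Utility-based action selection: U = E x V.
# All actions and numbers below are illustrative.

def utility(expectancy, value):
    """Utility of an act: expectancy of carrying it out successfully
    times the value assigned to its outcome."""
    return expectancy * value

def select_action(actions):
    """Choose the action with maximum utility U = E x V."""
    return max(actions, key=lambda a: utility(*actions[a]))

# Each action maps to an (E, V) pair.
actions = {
    "forage_nearby": (0.9, 2.0),   # safe, modest payoff  -> U = 1.8
    "hunt_big_game": (0.3, 10.0),  # risky, large payoff  -> U = 3.0
    "rest":          (1.0, 0.5),   # certain, low payoff  -> U = 0.5
}

print(select_action(actions))  # -> hunt_big_game
```

Note that the act chosen need not be the safest or the most probable to succeed; it is the one whose product of expectancy and value is largest.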
Today I had an epiphany: doesn’t the same logic apply to allocating attention to the various stimuli that bombard us? Assuming a spotlight view of attention, and assuming that attentional resources are limited, one is constantly faced with the problem of finding which stimuli in the world are salient and need to be attended to. Now, the leap I am making is that attention allocation, just like choosing to act volitionally, is an operant: not a reactive but a proactive process. It may be unconscious, but it still involves volition and ‘choosing’. Remember that even acts can be reactive, and thus there is room for reactive attention; but what I am proposing is that the majority of attention is proactive: actively choosing between stimuli and focusing on one to try to better predict the world. We are basically prediction machines that want to predict beforehand the state of the world most relevant to us, and this we do by classical, or Pavlovian, conditioning. We try to associate stimuli (CS) with other stimuli (UCS) or responses (UCR), and thus try to ascertain what the state of the world at time T would be given that a stimulus (CS) has occurred. Apart from being prediction machines, we are also agents that try to maximize rewards and minimize punishments by acting on this knowledge and interacting with the world. There are thousands of actions we could indulge in, but we choose wisely; there are thousands of stimuli in the external world, but we attend to salient features wisely.
Let me elaborate on the analogy. While selecting an action we maximize reward and minimize punishment; basically, we choose by the maximal utility function. While choosing which stimuli to attend to, we maximize our foreknowledge of the world and minimize surprises; basically, we choose by the maximal predictability function. We can even write an equivalent mathematical formula: predictability P = E x R, where P is the increase in predictability due to attending to stimulus 1; E is the probability that stimulus 1 correctly leads to the prediction of stimulus 2; and R is the relevance of stimulus 2 (the information) to us. Thus, the stimulus one attends to is the one that leads to the maximum gain in predictability. Also, just as the general energy level of the organism biases whether, and how much, it acts, there is a general arousal level of the organism that biases whether and how much it attends to stimuli.
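The proposed P = E x R rule for attention has exactly the same shape as the utility rule for action; here is an illustrative sketch in Python, where the stimuli, probabilities, and relevance values are assumptions of mine, not data:

```python
# Attention allocation as maximizing predictability gain: P = E x R,
# biased by a general arousal level. All numbers are illustrative.

def predictability_gain(expectancy, relevance):
    """E = probability that stimulus 1 correctly predicts stimulus 2;
    R = relevance of the predicted stimulus 2 to the organism."""
    return expectancy * relevance

def select_stimulus(stimuli, arousal=1.0):
    """Attend to the stimulus with maximal P, scaled by overall arousal
    (the analogue of the organism's general energy level for action)."""
    return max(stimuli, key=lambda s: arousal * predictability_gain(*stimuli[s]))

stimuli = {
    "rustling_grass": (0.7, 8.0),  # weakly predicts a predator (highly relevant)
    "bird_song":      (0.9, 1.0),  # reliably predicts something irrelevant
}

print(select_stimulus(stimuli))  # -> rustling_grass
```

The less reliable but more relevant cue wins, just as a risky but valuable act can win under U = E x V.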
So, what new insights do we gain from this formulation? A first insight comes from elaborating the analogy further. We know that the basal ganglia in particular, and dopamine in general, are involved in action selection. Dopamine is also heavily involved in operant learning. We can predict that dopamine systems, and the same underlying mechanisms, may also be used for attention allocation. Dopamine may be heavily involved in classical learning as well. Moreover, the basic computations and circuitry involved in allocating attention should be similar to those involved in action selection. Both disciplines can learn from each other and use methods developed in one field for understanding and elaborating phenomena in the other. For example, we know that dopamine, while coding for reward error/incentive salience, also codes for novelty and is heavily involved in novelty detection. Is that novelty detection driven by the need to avoid surprises, especially while allocating attention to a novel stimulus?
What are some of the predictions we can make from this model? Just as with the abundant literature on U = E x V in decision-making and action selection, we should be able to show the independent and interacting effects of expectancy and relevance on the attention-grabbing properties of a stimulus. The relevance of different stimuli can be manipulated by pairing them with a UCR/UCS of differing relevance. Expectancy can be manipulated by the strength of conditioning: more trials would mean a stronger association between the CS and UCS. The level of arousal may also bias the ability to attend to stimuli. I am sure there is much for attention research to learn from the research on decision-making and action selection, and the reverse is also true. It may even be that attention allocation is already conceptualized in the above terms; if so, I plead ignorance of that sub-field and would love to get a few pointers so that I can refine my thinking and framework.
Also consider that there is already some literature implicating dopamine in attention, and the fact that dopamine dysfunction in schizophrenia, ADHD, etc. has cognitive and attentional implications is an indication in itself. The contextual salience of drug-related cues may likewise be a powerful effect of dopamine-based classical conditioning and attention allocation hijacking the normal dopamine pathways in addicted individuals.
Lastly, I got set on this direction while reading an article on the chaining of actions to achieve desired outcomes, and on how two different brain systems (a cognitive, prefrontal high road based on model-based reinforcement learning, and an unconscious, dorsolateral-striatal low road based on model-free reinforcement learning) may be involved in deciding which action to choose and select. I believe the same conundrum presents itself when one turns to the attention-allocation problem, where stimuli are chained together and predict each other in succession; I would predict that there are two roads involved here too! But that is a matter for a future post. For now, I would love some honest feedback on what value, if any, this new conceptualization adds to what we already know about attention allocation.
Daniel Nettle writes an article in the Journal of Theoretical Biology about the evolution of low mood states. Before I get to his central thesis, let us review what he reviews:
Low mood describes a temporary emotional and physiological state in humans, typically characterised by fatigue, loss of motivation and interest, anhedonia (loss of pleasure in previously pleasurable activities), pessimism about future actions, locomotor retardation, and other symptoms such as crying.
This paper focuses on a central triad of symptoms which are common across many types of low mood, namely anhedonia, fatigue and pessimism. Theorists have argued that, whereas their opposites facilitate novel and risky behavioural projects, these symptoms function to reduce risk-taking. They do this, proximately, by making the potential payoffs seem insufficiently rewarding (anhedonia), the energy required seem too great (fatigue), or the probability of success seem insufficiently high (pessimism). An evolutionary hypothesis for why low mood has these features, then, is that it is adaptive to avoid risky behaviours when one is in a relatively poor current state, since one would not be able to bear the costs of unsuccessful risky endeavours at such times.
I would like to pause here and note how beautifully he has summed up the symptoms and key features of low mood, taking the liberty of recasting them in my own framework of Value x Expectancy and the distinction between the cognitive (‘wanting’) and behavioral (‘liking’) side of things:
- Anhedonia: behavioral inability to feel rewarded by previously pleasurable activities. Loss of ‘liking’ following the act. Less behavioral Value assigned.
- Loss of motivation and interest: cognitive inability to look forward to or value previously desired activities. Loss of ‘wanting’ prior to the act. Less cognitive Value assigned.
- Fatigue: behavioral inability to feel that one can achieve the desired outcome due to feelings that one does not have sufficient energy to carry the act to success. Less behavioral Expectancy assigned.
- Pessimism: cognitive inability to look forward to or expect good things about the future or that good outcomes are possible. Less cognitive Expectancy assigned.
The reverse conglomeration is found in high mood: high wanting and liking, high energy and outlook. Thus, I agree with Nettle fully that low and high mood are defined by these opposed features, and also that these features are powerful proximate mechanisms that determine the risk-proneness of the individual: by subjectively manipulating the Value and Expectancy associated with an outcome, high and low mood mediate the risk-proneness an organism displays while assigning a utility to an action. Thus, it is fairly settled: if the ultimate goal is to increase risk-prone behavior, the organism should use the proximate mechanism of high mood; if the ultimate goal is to avoid risky behavior, the organism should display low mood, which proximately helps it avoid risky behavior.
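As a toy sketch of this proximate mechanism (my own illustration, not Nettle's model): suppose mood acts as a multiplicative bias on the subjective E and V entering U = E x V, and an act is taken only if its subjective utility beats a fixed 'lie low' baseline. All the numbers are illustrative assumptions:

```python
# Toy sketch: mood as a multiplicative bias on subjective Expectancy and Value.
# The bias values, payoffs, and baseline are illustrative assumptions.

def subjective_utility(expectancy, value, mood):
    """mood < 1 (low mood) shrinks both E and V; mood > 1 inflates them."""
    return (mood * expectancy) * (mood * value)

LIE_LOW_BASELINE = 1.0  # fixed utility of conserving energy (illustrative)

risky_act = (0.4, 10.0)  # low expectancy, large value

for mood in (0.5, 1.0, 1.5):  # low, neutral, high mood
    u = subjective_utility(*risky_act, mood)
    print(mood, "act" if u > LIE_LOW_BASELINE else "lie low")
# -> 0.5 lie low / 1.0 act / 1.5 act
```

The same objective act looks not worth the energy under low mood and attractive under neutral or high mood, which is the proximate risk-modulation described above.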
Now let me talk about Nettle’s central thesis. It has been previously proposed in the literature that low mood (and thus risk aversion) is due to being in a poor state, wherein one can avoid energy expenditure (and thus a worsening of the situation) by keeping a low profile. Nettle plays devil’s advocate and argues that an exactly opposite case can be made: the organism in a poor state needs to indulge in high-risk (and high-energy) activities to get out of that state. Thus, there is no a priori reason why one explanation should be sounder than the other. To find out when exactly high-risk behaviors pay off and when low-risk behaviors are more optimal, he develops a model and uses some elementary mathematics to derive conclusions. He bases his model, of course, on a preventive focus, whereby the organism tries to avoid falling into a sub-threshold state R. He allows the current state S(t) to be maximized under the constraint that one does not lose sight of R. I’ll not go into the mathematics, but the results are simple. When there is a lot of difference between R (the dreaded state) and S (the current state), the organism adopts a risky behavioral profile; when R and S are close, it maintains low-risk behavior; however, in dire circumstances (R and S very close), risk-proneness again rises to dramatic levels. To quote:
The model predicts that individuals in a good state will be prepared to take relatively large risks, but as their state deteriorates, the maximum riskiness of behaviour that they will choose declines until they become highly risk-averse. However, when their state becomes dire, there is a predicted abrupt shift towards being totally risk-prone. The switch to risk-proneness at the dire end of the state continuum is akin to that found near the point of starvation in the original optimal foraging model from which the current one is derived (Stephens, 1981). The graded shift towards greater preferred risk with improving state is novel to this model, and stems from the stipulation that if the probability of falling into the danger zone in the next time step is minimal, then the potential gain in S at the next time step should be maximised. However, a somewhat similar pattern of risk proneness in a very poor state, risk aversion in an intermediate state, and some risk proneness in a better state, is seen in an optimal-foraging model where the organism has not just to avoid the threshold of starvation, but also to try to attain the threshold of reproduction (McNamara et al., 1991). Thus, the qualitative pattern of results may emerge quite generally from models using different assumptions.
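The qualitative pattern in that passage can be captured in a toy policy function. This is my sketch, not Nettle's actual model: the margins and state values are illustrative, and the model's graded decline in preferred riskiness is collapsed here into discrete bands:

```python
# Qualitative sketch of the predicted risk policy as a function of the
# current state S relative to the dreaded threshold R. Margins illustrative.

def preferred_risk(s, r, dire_margin=1.0, safe_margin=5.0):
    """Return the qualitative risk profile for current state s vs threshold r."""
    gap = s - r
    if gap <= dire_margin:   # dire straits: abrupt shift to total risk-proneness
        return "totally risk-prone"
    if gap < safe_margin:    # near the danger zone: become risk-averse
        return "risk-averse"
    return "risk-prone"      # comfortable buffer: take larger risks

R = 10.0
for s in (10.5, 12.0, 20.0):  # dire, deteriorated, good state
    print(s, preferred_risk(s, R))
# -> 10.5 totally risk-prone / 12.0 risk-averse / 20.0 risk-prone
```

The non-monotonic shape, risk-prone at both the good and the dire ends of the state continuum with risk aversion in between, is the key prediction.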
Nettle then extrapolates the clinical significance of this by proposing that ‘agitated’ or ‘excited’ depression can be explained as the organism in dire straits having become risk-prone. He uses a similar logic for dysphoric mania, although I don’t buy that. However, I agree that euphoric mania may just be the other extreme of high mood, risk-proneness, and goal achievement, while depression is the normal extreme of low mood, adverse circumstances, and risk aversion. To me this model ties together certain things we know about life circumstances and the risk profiles and mood tone of people, and it deepens our understanding.
Nettle, D. (2009). An evolutionary model of low mood states Journal of Theoretical Biology, 257 (1), 100-103 DOI: 10.1016/j.jtbi.2008.10.033