I know that the computer metaphor does not do justice to the brain, but can we conceivably come up with a universal algorithm for how the brain processes stimuli and reacts/responds to them? Further, can we then tie those algorithmic sub-modules to the actual neural subsystems/structures and neurotransmitter systems instantiated in the physical brain?
That is what I intend to do today, but first let us lay out a very basic algorithm of how the brain processes a stimulus and responds to it. Consider it like a flowchart, with a decision made at each step: at each step marked 1, nothing further happens; at each step marked 2, two further choices are available.
Now, let me unpack this a bit. The first step, for the purposes of this post, is an incoming stimulus. When the stimulus arrives, we (the brain) can be at different levels of alertness and lookout for incoming stimuli; thus the brain may miss or detect the stimulus. We may be in neuro-vegetative states like sleep and feeding, may be relaxing, and so miss both threatening and rewarding stimuli. Or we can be in a vigilant mode, either on the lookout for danger or, say, alert and ready to pounce on prey. A vigilance system can reliably be conjectured to underlie this, and indeed the Locus Coeruleus-Norepinephrine (LC-NE) system may be exactly the system that makes us alert and inhibits neuro-vegetative states. Another relevant brain structure here is the amygdala, popularly known for its role in detecting threatening stimuli, but involved in detecting pleasant stimuli too. Hypersensitivity of this system can conceivably lead to anxiety at one end (constant lookout for trouble) and addiction (constant lookout for possible gains) at the other. One can also extend this line of reasoning and posit that differential sensitivity of this system may underlie the personality trait of Neuroticism.
Once you have noticed or attended to a stimulus, what next? Not every stimulus is salient or important. The next step for the brain is to identify whether the stimulus is indeed important from a functional point of view: whether it is an indicator of, or an actual, reward or punishment. Here comes the incentive salience function of dopamine. Dopamine neurons in, say, the Nucleus Accumbens (NAcc) area code for whether the incoming stimulus is important or not (see the work of Berridge et al); if it's not important, nothing needs to be done; if it is important and consequential, then an appropriate response needs to be executed. Activity has to ensue. Please note that though the NAcc is typically thought of as part of a reward circuit, it is equally involved in determining the salience of an aversive stimulus. Hypersensitivity of this incentive salience system can conceivably lead to depression at one end (where all stimuli are important, but negatively toned or aversive) and mania at the other (where all stimuli are important, but perceived as positively valenced). One can also extend this line of reasoning and associate differential sensitivity in this system with the trait of Extraversion.
Once you have determined that the stimulus is important and needs responding to, how do you determine the right response? One effortless and 'hot' way is to use the default response: if someone threatens you, punch them in the face! The other, more effortful and 'cold' way is to choose a response from the response sets that have been activated, or to override the default response and select something better. This is the self-regulation system. As a brain region, the ACC surely has a major role to play here, detecting conflicts between responses and also inhibiting the dominant default response. In terms of neurotransmitters, I see a role for serotonin here, regulating the response, especially emotional and instinctual responses. Hypersensitivity of this system may lead to obsessions (rigid thinking) and compulsions (rigid acting), and differential sensitivity in the system may be associated with Conscientiousness.
Now that you/your brain has chosen the most appropriate response, one further step needs to be executed before you actually execute the action. Many readers of this blog will be familiar with the Value-Expectancy model of motivation: value was coded by dopamine neurons using incentive salience, but what about expectancy? Basically, the V-E model posits that an action will be taken only if you value the outcome and are reasonably sure that you can act in such a way as to achieve it. Neurons in the PFC may conceivably code for outcome prediction: the PFC is important for predicting whether a particular course of action will lead to the desired results. It is also conceivable that dopamine neurons play an important role here. The basic idea is to predict whether you can execute the response and receive the reward or avoid the punishment, and only then, if the action is feasible, execute the action. This outcome prediction module, I think, recruits the PFC to a large extent. Hypersensitivity of this system may lead to ADHD, and differential sensitivity may be associated with Openness to experience.
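The four stages above can be sketched as a toy decision procedure. All thresholds, parameter names, and numeric values below are hypothetical and purely illustrative; this is a sketch of the proposed pipeline, not a model of any actual neural computation:

```python
# Toy sketch of the four-stage stimulus-response pipeline described above.
# Thresholds and parameter names are hypothetical, for illustration only.

def process_stimulus(stimulus, vigilance=0.5, salience_gain=1.0,
                     self_regulation=0.7, expectancy=0.6):
    # Stage 1: vigilance (LC-NE / amygdala): detect or miss the stimulus
    if stimulus["intensity"] * vigilance < 0.25:
        return "missed"
    # Stage 2: incentive salience (dopamine / NAcc): is it important?
    if abs(stimulus["valence"]) * salience_gain < 0.5:
        return "ignored"
    # Stage 3: self-regulation (ACC / serotonin): default vs. considered response
    response = "default reaction" if self_regulation < 0.5 else "considered response"
    # Stage 4: outcome prediction (PFC): act only if success seems likely
    if expectancy < 0.5:
        return "withheld"
    return response

print(process_stimulus({"intensity": 0.9, "valence": -0.8}))  # -> considered response
```

Note that the pipeline degrades gracefully: a weak stimulus exits at stage 1, an unimportant one at stage 2, and a low-expectancy action is withheld at stage 4, mirroring the flowchart's "nothing further happens" branches.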
To me the above looks neat, logical, and elegant, and I would love your comments on it, as well as any contradictions you see in the literature or any additional thoughts you may have.
“Between stimulus and response there is a space. In that space is our power to choose our response. In our response lies our growth and our freedom.” Viktor E. Frankl
In today’s post I will be drawing heavily from the spiritual traditions of India (Yoga etc.), and interested readers are redirected to these excellent sources for more information about the same.
As per the spiritual tradition of India, Mind (or Antahkaran) is made up of four functions or parts. These are Manas, Chitta, Ahamkar and Buddhi. These are typically translated as sensory-motor mind, memory bank, ego and intellect respectively. As an interesting aside, Buddha derives from the common root of Buddhi (budh- to know) and stands for the enlightened one.
Here is a brief description of the four functions:
Manas is ordinary, indeterminate thinking: just being aware that something is there, automatically registering the facts which the senses perceive.
The subconscious action, memory, etc., is caused by chitta. The function of chitta is chinta (contemplation), the faculty whereby the Mind in its widest sense raises for itself the subject of its thought and dwells thereon.
Buddhi determines, decides, and logically comes to the conclusion that something is such-and-such a thing. That is another aspect of the operation of the psyche: buddhi, or intellect. Buddhi, on attending to such registration, discriminates, determines, and cognizes the object registered, which is set over and against the subjective self by ahamkara.
Ahamkara — ego, affirmation, assertion, ‘I know’. “I know that there is some object in front of me, and I also know that I know. I know that I am existing as this so-and-so.” This kind of affirmation attributed to one’s own individuality is the work of ahamkara, known as egoism.
There is also a hierarchical relation among these, with Buddhi at the top and Manas at the bottom. Now, let's look at each of these more closely.
Manas, or the sensory-motor mind, is not just registering stimuli but is also responsible for executing actions, and may be equated with the sensory/motor cortical functions of the brain. It controls the 10 Indriyas (5 senses and 5 action-oriented faculties). It's important to note that Manas performs functions associated with both stimulus and response, though it is the first stage when it comes to stimulus processing (registering the stimulus) and the last when it comes to executing responses/actions (it blindly executes the action that has been decided/chosen upstream). Of course, one could just have a reflex action where a stimulus leads directly to a response, but in the majority of human action there is a space between the two. That space is provided by the rest of the mind's functions.
Chitta, or the memory-prospecting mind, may typically be equated with the association cortex of the brain. Many refer to chitta as the memory or impressions bank, but forget to mention its future-oriented part. Here is a quote:
The part of the Mind thinking and visualizing the objects, events and experiences from the past or the future (emphasis mine) is called the Chitta and this act is called Chintan.
It is thus evident that Chitta drives Manas not only based on past memories, but also based on future expectations or predictions. From brain studies, we know that the same part of the brain is used for memory as well as prospection. Chitta using past memories to drive Manas (and thus behavior or motivated cognition) I view as being conditioned by classical conditioning processes; Chitta using future expectations/predictions to drive behavior and motivated cognition I view as being conditioned by operant conditioning processes. In many philosophical and spiritual traditions, one of the aims is to get over (social) conditioning. Chitta hinders spiritual awakening by using habits, which are an integral part of chitta's function. The habits are nothing but the conditioning, again one in the stimulus path and the other in the response/action path.
Ahamkara, or the experiential-agentic self, may typically be equated with consciousness/the conscious and ego-driven self. It knows and says 'I am'. Conscious entities typically have two functions: experience and agency. There is something it is like to be that conscious entity (experience), and the entity has volition or the ability to do things (agency). The concept of self as a conscious entity that has experience (in the stimulus path) and agency (in the response/action path) is important for this notion of ahamkara. With self come concepts like the real self and the ideal self, which drive and are driven by experience and agency respectively. The less the discrepancy between the two, the better your spiritual growth. An interesting concept here is that of coloring or external decorations: your coloring, or how you see your self, does have a downward impact on chitta and manas by contaminating the stimulus/action.
Buddhi, or the knowing-deciding mind, is the final frontier on your path to spirituality. The typical functions associated with Buddhi are knowing, discriminating, judging, and deciding. I think knowing/discriminating (between stimuli, actions, etc.) is a stimulus-path function, while judging/deciding (between actions/responses, or whether to attend to a stimulus) is a response-path function. However, I also believe they converge to a great extent here, or else we would have a problem of turtles all the way down. Once you start to see things as they are, you are also able to choose wisely. At least that is what the scriptures say and what Bodhisattvas aspire to or achieve.
To me, this increasingly fine-grained control of what we perceive and how we act, from the gross actions and perceptions of Manas to the discriminating decisions of Buddhi, is very intuitively appealing and also appears to be grounded in psychological and neural processes.
Mindfulness (Buddhism-based) has become all the rage nowadays. Yet if we look at the spiritual traditions of India, while Yoga, defined as Chitta vritti nirodhah (or "Yoga is the silencing of the modifications of the mind"), does refer to being in the present (here-and-now) and not being disturbed by the perturbations of chitta (memories of the past or expectations of the future), one also needs to go beyond just Chitta vritti, to addressing the Ahamkara coloring, and finally to trying to achieve the Buddha nature, where there is little disparity between doing and being. (Mindfulness) meditation needs to move beyond being curious, non-judgmental, and in the present, to a state where one does have a judgment function, but one that is perfectly attuned.
Today I plan to touch upon the topic of consciousness (a topic from which many bloggers shy away) and, more broadly, try to delineate what I believe are the important different conscious and unconscious processes in the brain. I will be relying heavily on my evolutionary stages model for this.
To clarify at the very start: I do not believe in a purely reactive nature of organisms; I believe that apart from reacting to stimuli/the world, they also act on their own, and are thus agents. To elaborate, I believe that neuronal groups and circuits may fire on their own and thus lead to behavior/action. I do not claim that this firing is under voluntary/volitional control; it may be random. The important point to note is that there is spontaneous motion.
Sensory system: To start with, I propose that the first function/process the brain needs to develop is the ability to sense its surroundings. This is to avoid predators/harm in general. This sensory function of the brain/sense organs may be unconscious and need not become conscious: as long as an animal can sense danger, even without being aware of the danger, it can take appropriate action, a simple 'action' being changing its color to merge with the background.
Motor system: The second function/process that the brain needs to develop is a system that enables motion/movement. This is primarily to explore the environment for food/nutrients. Prey are not going to walk into your mouth; you have to move around and locate them. Again, this movement need not be volitional/conscious: as long as the animal moves randomly and sporadically to explore new environments, it can 'see' new things and eat a few. This 'seeing' may be as simple as sensing the chemical gradient in a new environment.
Learning system: The third function/process that the brain needs to develop is a system that enables learning. It is not enough to sense the environmental here-and-now. One needs to learn the contingencies in the world and remember them, both in space and time. I am inclined to believe that this is primarily Pavlovian conditioning and associative learning, though I don't rule out operant learning. Again, this learning need not be conscious: one need not explicitly refer to a memory to utilize it; unconscious learning and memory of events can suffice and can drive interactions. I also believe that the need for this function is primarily driven by the fact that one interacts with similar environments/conspecifics/predators/prey, and it helps to remember which environmental conditions/operant actions lead to what outcomes. This learning could be as simple as 'stimulus A predicts stimulus B' and/or 'action C predicts reward D'.
Affective/action-tendencies system: The fourth function I propose that the brain needs to develop is a system to control its motor system/behavior by making it more in sync with its internal state. This, I propose, is done by a group of neurons monitoring the activity of other neurons/visceral organs, thus becoming aware (in a non-conscious sense) of the global state of the organism and of the probability that a particular neuronal group will fire in the future; by their outputs, these neurons may be able to enable one group to fire while inhibiting other groups. To clarify by way of example: some neuronal groups may be responsible for movement. Another neuronal group may receive inputs from these, as well as, say, input from the gut signaling that no movement has happened for a while and that the organism has also not eaten for a while and is thus in a 'hungry' state. This may prompt these neurons to send excitatory outputs to the movement-related neurons, biasing them towards firing and thus increasing the probability that motion will take place; perhaps the organism, by indulging in exploratory behavior, may be able to satisfy its hunger. Of course, these neurons will inhibit other neuronal groups from firing, and will themselves stop firing when appropriate motion takes place or prey is eaten. Again, none of this has to be conscious: the state of the organism (like hunger) can be discerned unconsciously, and the action-tendencies biasing foraging behavior can also be activated unconsciously. As long as the organism prefers certain behaviors over others depending on its internal state, everything works perfectly. I propose that (unconscious) affective (emotional) states and systems have emerged to fulfill exactly this need: being able to differentially activate different action-tendencies suited to the needs of the organism.
I will also stick my neck out and claim that the activation of a particular emotion/affective system biases our sensing as well. If the organism is hungry, the food tastes better (is unconsciously more vivid), and vice versa. Thus affects are not only action-tendencies, but are also, to an extent, sensing-tendencies.
Decisional/evaluative system: The last function (for now; remember, I adhere to eight-stage theories, and we have just seen five brain processes in increasing hierarchy) that the brain needs is a system to decide/evaluate. Learning lets us predict our world as well as the consequences of our actions. Affective systems provide us some control over our behavior and over our environment, but are automatically activated by the state we are in. Something needs to bring these together so that the competition between actions triggered by the state we are in (affective action-tendencies) and actions that may be beneficial given the learning associated with the current stimulus/state of the world is resolved satisfactorily. One has to balance the action-reaction ratio and the subjective versus objective interpretation/sensation of the environment. The decisional/evaluative system, I propose, does this by associating values with different external event outcomes and different internal state outcomes, and by resolving the trade-off between the two. This again need not be conscious: given a stimulus predicting a predator in the vicinity, and the internal state of the organism as hungry, the organism may have attached more value to 'avoid being eaten' than to 'finding prey', and thus may not move but camouflage itself. On the other hand, if the organism's value system is such that it prefers a hero's death on the battlefield to starvation, it may move (in search of food). Again, this could exist in the simplest of unicellular organisms.
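The predator-versus-hunger trade-off can be sketched as a toy value comparison. The value weights and variable names here are hypothetical, hand-picked for illustration; the point is only that a single comparison of weighted values can resolve the competition between action-tendencies:

```python
# Toy sketch of the decisional/evaluative trade-off described above:
# external evidence (predator cue) vs. internal state (hunger), resolved
# by comparing hypothetical, hand-picked values.

def decide(predator_cue, hunger, v_survive=1.0, v_food=0.6):
    threat_cost = predator_cue * v_survive   # value of not being eaten
    food_gain = hunger * v_food              # value of finding prey
    return "forage" if food_gain > threat_cost else "camouflage"

print(decide(predator_cue=0.9, hunger=0.8))  # survival outweighs hunger -> camouflage
print(decide(predator_cue=0.1, hunger=0.8))  # hunger outweighs weak cue -> forage
```

An organism with a "hero's death" value system, in the post's terms, would simply carry a lower `v_survive` weight and forage even under a strong predator cue.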
Of course, all of these brain processes could (and in humans indeed do) have their conscious counterparts, like Perception, Volition, episodic Memory, Feelings, and Deliberation/thought. But that is a different story for a new blog post!
And of course, one can also conceive of the above in pure reductionist form as a chain:
sense -> recognize & learn -> evaluate options and decide -> emote and activate action tendencies -> execute and move.
One can also say that movement leads to new sensation, and that the above is not a chain but part of a cycle; all that is valid, but I would sincerely request my readers to consider the possibility of spontaneous and self-driven behavior as separate from reactive motor behavior.
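The cycle, including the possibility of spontaneous (non-reactive) action, can be sketched as a loop. Every name and probability below is hypothetical, chosen only to make the structure of the cycle concrete:

```python
import random

# Minimal sketch of the sense -> learn -> evaluate -> emote -> act cycle,
# with a small chance of spontaneous, self-generated action on each pass.
# All names and probabilities are hypothetical, for illustration only.

def one_cycle(world_state, memory, internal_state, p_spontaneous=0.1):
    if random.random() < p_spontaneous:              # spontaneous, non-reactive action
        return "spontaneous move"
    percept = world_state                            # sense
    memory[percept] = memory.get(percept, 0) + 1     # learn (association strength)
    value = memory[percept]                          # evaluate: familiar stimuli gain value
    tendency = "approach" if internal_state == "hungry" else "rest"   # emote
    return f"{tendency} (value={value})"             # execute

random.seed(42)
memory = {}
for _ in range(3):
    print(one_cycle("food-odour", memory, "hungry"))
```

Note that the output of each pass (movement) changes what is sensed on the next pass, which is exactly what turns the chain into a cycle; the `p_spontaneous` branch is the self-driven behavior argued for above.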
Till now, most of the research on learning at the molecular level, or on LTP/LTD, has focused on classical conditioning paradigms. To my knowledge, for the first time someone has started looking at whether, at the molecular level, classical conditioning, which works by associations between external stimuli, is encoded and implemented differently from operant learning, which depends on learning the reward contingencies of one's spontaneously generated behavior.
Bjorn Brembs and colleagues have shown that the normal learning pathway implicated in classical conditioning, which involves the rutabaga gene in the fruit fly and works via adenylyl cyclase (AC), is not involved in pure operant learning; rather, pure operant learning is mediated by Protein Kinase C (PKC) pathways. This is not only a path-breaking discovery, clearly demonstrating a double dissociation using genetically mutant flies; it is also a marvelous example of how a beautiful experimental setup was devised to separate and remove the classical conditioning effects from normal operant learning and generate a pure operant learning procedure. You can read more about the procedure on Bjorn Brembs' site, and he also maintains a very good blog, so check that out too.
Here is the abstract of the article; the full article is available at Bjorn Brembs' site.
Learning about relationships between stimuli (i.e., classical conditioning) and learning about consequences of one’s own behavior (i.e., operant conditioning) constitute the major part of our predictive understanding of the world. Since these forms of learning were recognized as two separate types 80 years ago, a recurrent concern has been the issue of whether one biological process can account for both of them. Today, we know the anatomical structures required for successful learning in several different paradigms, e.g., operant and classical processes can be localized to different brain regions in rodents and an identified neuron in Aplysia shows opposite biophysical changes after operant and classical training, respectively. We also know to some detail the molecular mechanisms underlying some forms of learning and memory consolidation. However, it is not known whether operant and classical learning can be distinguished at the molecular level. Therefore, we investigated whether genetic manipulations could differentiate between operant and classical learning in Drosophila. We found a double dissociation of protein kinase C and adenylyl cyclase on operant and classical learning. Moreover, the two learning systems interacted hierarchically such that classical predictors were learned preferentially over operant predictors.
Do take a look at the paper and the experimental setup, and let's hope that operant learning becomes more of a focus from now on, leading to a paradigmatic shift in molecular neuroscience; in my opinion, operant conditioning results are more applicable to humans than classical conditioning results.
Brembs, B., & Plendl, W. (2008). Double dissociation of PKC and AC manipulations on operant and classical learning in Drosophila. Current Biology, 18(15), 1168-1171. DOI: 10.1016/j.cub.2008.07.041
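The behavioral-level distinction between the two forms of learning can be illustrated with textbook-style update rules: a Rescorla-Wagner rule for classical (stimulus-stimulus) conditioning and a simple action-value rule for operant (action-outcome) learning. This is only a sketch of the computational distinction, not a model of the AC vs. PKC molecular pathways in the study:

```python
# Illustrative contrast between classical and operant learning rules.
# A textbook-style sketch; parameter values are arbitrary.

def rescorla_wagner(trials, alpha=0.3):
    """Classical conditioning: the CS comes to predict the US."""
    v = 0.0                                  # associative strength of the CS
    for us_present in trials:
        v += alpha * ((1.0 if us_present else 0.0) - v)   # prediction-error update
    return v

def operant_value(rewards, alpha=0.3):
    """Operant conditioning: value of a self-generated action."""
    q = 0.0                                  # estimated value of the action
    for r in rewards:
        q += alpha * (r - q)                 # reward-prediction-error update
    return q

cs_strength = rescorla_wagner([True] * 10)   # CS always followed by US
action_value = operant_value([1.0] * 10)     # action always rewarded
print(round(cs_strength, 3), round(action_value, 3))   # -> 0.972 0.972
```

The two rules look formally similar here, which is precisely why the molecular double dissociation in the paper is interesting: behaviorally parallel learning curves can nonetheless ride on distinct biochemical pathways.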
Today I wish to discuss C. Robert Cloninger's theory of temperament and character traits. It is a psychobiological theory based on genetic and neural substrates and mechanisms, and in it he proposes the existence of four temperament traits and three character traits, thus positing seven personality traits in all. First, the abstract, to give you some idea:
In this study, we describe a psychobiological model of the structure and development of personality that accounts for dimensions of both temperament and character. Previous research has confirmed four dimensions of temperament: novelty seeking, harm avoidance, reward dependence, and persistence, which are independently heritable, manifest early in life, and involve preconceptual biases in perceptual memory and habit formation. For the first time, we describe three dimensions of character that mature in adulthood and influence personal and social effectiveness by insight learning about self-concepts. Self-concepts vary according to the extent to which a person identifies the self as (1) an autonomous individual, (2) an integral part of humanity, and (3) an integral part of the universe as a whole. Each aspect of self-concept corresponds to one of three character dimensions called self-directedness, cooperativeness, and self-transcendence, respectively. We also describe the conceptual background and development of a self-report measure of these dimensions, the Temperament and Character Inventory. Data on 300 individuals from the general population support the reliability and structure of these seven personality dimensions. We discuss the implications for studies of information processing, inheritance, development, diagnosis, and treatment.
I'll list them briefly below, in order, along with their subscales/facets:
I) Novelty seeking (NS)
Exploratory excitability (NS1)
II) Harm avoidance (HA)
Anticipatory worry (HA1)
Fear of uncertainty (HA2)
III) Reward dependence (RD)
Openness to warm communication (RD2)
IV) Persistence (PS)
Eagerness of effort (PS1)
Work hardened (PS2)
V) Self-directedness (SD)
Enlightened second nature (SD5)
VI) Cooperativeness (C)
Social acceptance (C1)
Pure-hearted conscience (C5)
VII) Self-transcendence (ST)
Transpersonal identification (ST2)
Spiritual acceptance (ST3)
To me this lacks one more trait, and I'm sure Cloninger will identify and add one more in the future (he added the three character traits relatively late).
Now for the meat of the post. My thesis is that these are similar to the Big Eight temperaments that I have discussed in my earlier post and follow the same eightfold developmental/evolutionary pattern. Further, I would claim that each facet of a trait follows the same structure. Most traits have 4 or 5 facets, and these are typically related to 5 major ways of reacting/relating to the world around us. It is also my thesis that just as Cloninger tied the initial three traits to behavioral inhibition, behavioral approach, and behavioral maintenance, and to the three neurotransmitter systems of serotonin, dopamine, and norepinephrine respectively, the same line of argument can be extended to the other facets, with further biogenic amine CNS neurotransmitter pathways correlated with each trait.
Individuals high in HA tend to be cautious, careful, fearful, tense, apprehensive, nervous, timid, doubtful, discouraged, insecure, passive, negativistic, or pessimistic even in situations that do not normally worry other people. These individuals tend to be inhibited and shy in most social situations. Their energy level tends to be low, and they feel chronically tired or easily fatigued. As a consequence, they need more reassurance and encouragement than most people and are usually sensitive to criticism and punishment. The advantages of high Harm Avoidance are the greater care and caution in anticipating possible danger, which leads to careful planning when danger is possible. The disadvantages occur when danger is unlikely but still anticipated; such pessimism or inhibition leads to unnecessary worry.
In contrast, individuals with low scores on this temperament dimension tend to be carefree, relaxed, daring, courageous, composed, and optimistic even in situations that worry most people. These individuals are described as outgoing, bold, and confident in most social situations. Their energy level tends to be high, and they impress others as dynamic, lively, and vigorous persons. The advantages of low Harm Avoidance are confidence in the face of danger and uncertainty,leading to optimistic and energetic efforts with little or no distress. The disadvantages are related to unresponsiveness to danger, which can lead to reckless optimism.
From the above it is clear that this is related to Neuroticism. This would also be related to anxiety witnessed in clinical situations and requiring treatment. It is instructive to note that Cloninger proposes the serotonin CNS system as a substrate for this trait, and that many anti-anxiety drugs actually target serotonin receptors (SSRIs are among the best anti-anxiety drugs). Also, as per the model, this trait is involved in behavioral inhibition. Let me elaborate and propose that what is meant by behavioral inhibition is learning to avoid the predator. In operant conditioning paradigms this would be learning due to positive punishment: learning to inhibit a pre-potent behavior because of punishments.
Individuals high in Novelty Seeking tend to be quick-tempered, excitable, exploratory, curious, enthusiastic, ardent, easily bored, impulsive, and disorderly. The advantages of high Novelty Seeking are enthusiastic and quick engagement with whatever is new and unfamiliar, which leads to exploration of potential rewards. The disadvantages are related to excessive anger and quick disengagement whenever their wishes are frustrated, which leads to inconsistencies in relationships and instability in efforts.
In contrast, individuals low in Novelty Seeking are described as slow-tempered, indifferent, uninquisitive, unenthusiastic, unemotional, reflective, thrifty, reserved, tolerant of monotony, systematic, and orderly.
These are classic impulsiveness-related characteristics and can safely be associated with the dopamine system. This trait, then, is related to Conscientiousness and is driven by rewards and reward-related behavioral learning. Excess in this trait may result in psychosis, and many anti-psychotic drugs act on this dopamine system. This is the traditional behavioral activation system. In operant conditioning terms we can call this learning under positive reinforcement: new behaviors are learned, or the strength of old behaviors is modified (increased), in the presence of primary reinforcers like food, sex (even money), etc.
Individuals who score high in Reward Dependence tend to be tender-hearted, loving and warm, sensitive, dedicated, dependent, and sociable. They seek social contact and are open to communication with other people. Typically, they find people they like everywhere they go. A major advantage of high Reward Dependence is the sensitivity to social cues, which facilitates warm social relations and understanding of others’ feelings. A major disadvantage of high Reward Dependence involves the ease with which other people can influence the dependent person’s views and feelings, possibly leading to loss of objectivity.
Individuals low on Reward Dependence are often described as practical, tough-minded, cold, and socially insensitive. They are content to be alone and rarely initiate open communication with others. They prefer to keep their distance and typically have difficulty finding something in common with other people. An advantage of low Reward Dependence is independence from sentimental considerations.
From the above it is clear that this is related to the trait of Extraversion or sociability, and influences how adept, and how prone, one is at forming alliances and friendships. This has been hypothesized to be related to the norepinephrine system and to behavioral maintenance. In operant conditioning terms, I interpret it as maintaining a behavior despite no real (primary) reinforcement, just because of secondary reinforcement (social approval, praise, status, etc.). This is not necessarily maladaptive, and secondary reinforcers are necessary; but too much dependence on them may lead to depression. The initial anti-depressants all worked on the norepinephrine system, and the monoamine theory of depression is still around. I believe that depression is multi-factorial, but social striving/approval/negotiation is a prime facet underlying the illness.
Individuals high in Persistence tend to be industrious, hard-working, persistent, and stable despite frustration and fatigue. They typically intensify their effort in response to anticipated reward. They are ready to volunteer when there is something to be done, and are eager to start work on any assigned duty. Persistent persons tend to perceive frustration and fatigue as a personal challenge. They do not give up easily and, in fact, tend to work extra hard when criticized or confronted with mistakes in their work. Highly persistent persons tend to be ambitious overachievers who are willing to make major sacrifices to be a success. A highly persistent individual may tend to be a perfectionist and a workaholic who pushes him/herself far beyond what is necessary to get by. High Persistence is an adaptive behavioral strategy when rewards are intermittent but the contingencies remain stable. However, when the contingencies change rapidly, perseveration becomes maladaptive.
When reward contingencies are stable, individuals low in Persistence are viewed as indolent, inactive, unreliable, unstable, and erratic on the basis of both self-reports and interviewer ratings. They rarely intensify their effort even in response to anticipated reward. These persons rarely volunteer for anything they do not have to do, and typically go slow in starting work, even if it is easy to do. They tend to give up easily when faced with frustration, criticism, obstacles, and fatigue. These persons are usually satisfied with their current accomplishments, rarely strive for bigger and better things, and are frequently described as underachievers who could probably accomplish far more than they actually do, but who do not push themselves harder than is necessary to get by. Low scorers manifest a low level of perseverance and repetitive behaviors even in response to intermittent reward. Low Persistence is an adaptive strategy when reward contingencies change rapidly, and may be maladaptive when rewards are infrequent but occur in the long run.
By some stretch of imagination one can relate this to being empathetic or agreeable (volunteering etc.) and thus to Agreeableness. One way this could be related to parental investment is that those who do not care for their kids have children that give up easily and are frustrated easily; thus the same mechanism may underlie both parental care behavior and persistent behavior in the kid. This behavior/trait, I propose, may be related to the epinephrine CNS system. In operant conditioning terms, this is behavioral persistence despite no primary or even secondary reinforcement. Of course extinction will eventually happen in the absence of reward, but factors like the time or number of trials taken to achieve extinction may be relevant here. Although the behavior is not reinforced at all, it is persisted with, and maybe even different related variations are tried to get the desired reward. Stimulants as a class of drugs may be acting on this pathway, stimulating individuals to engage in behavior despite no reinforcement.
Highly self-directed persons are described as mature, strong, self-sufficient, responsible, reliable, goal-oriented, constructive, and well-integrated individuals when they have the opportunity for personal leadership. They have good self-esteem and self-reliance. The most distinctive characteristic of self-directed individuals is that they are effective, able to adapt their behavior in accord with individually chosen, voluntary goals. When a self-directed individual is required to follow the orders of others in authority, they may be viewed as a rebellious troublemaker because they challenge the goals and values of those in authority.
In contrast, individuals who are low in Self-Directedness are described as immature, weak, fragile, blaming, destructive, ineffective, irresponsible, unreliable, and poorly integrated when they are not conforming to the direction of a mature leader. They are frequently described by clinicians as immature or having a personality disorder. They seem to be lacking an internal organizational principle, which renders them unable to define, set, and pursue meaningful goals. Instead, they experience numerous minor, short-term, frequently mutually exclusive motives, none of which can develop to the point of long-lasting personal significance and realization.
To me the above looks very much like the rebelliousness/conformity facet of Openness or Intellect, the core idea being whether one has achieved ego-integrity and good habits. I propose that histamine or melatonin may be the monoamine CNS system involved here, though phenylethylamine (PEA) also seems a good target, as do tyrosine and other trace amines. Whatever the neurotransmitter system involved, the operant conditioning phenomenon would be learning to engage in behavior despite positive punishment: the ability to go against the grain, convention, or social expectations and be true to oneself. This behavior can be called learning under negative reinforcement, i.e. engaging in a behavior despite there being troubling things around, in the hope that they would be taken away on successful new behavior. I would also relate this to the behavioral repertoire of the individual. People high on this trait would show greater behavioral variability during extinction trials and come up with novel and insightful problem-solving behaviors.
That is it for now; I hope to back up these claims, and extend this to the remaining three traits too, in the near future. Some things I am toying with are either classical conditioning and avoidance learning at these higher levels, or behavior remembering (as opposed to learning) at these higher levels. Also, other neurotransmitter systems like glutamate, glycine, GABA and aspartate may be active at the higher levels, and neuropeptides, which are broadly classified into five groups, may have some role here too. Keep guessing and do contribute to the theory if you can!!
In Part I, he discusses Stanley Milgram's compliance experiments, wherein, under the authority of a professor, subjects were induced to apply outrageous electric shocks to confederates. This experiment is a classic in social psychology and showed how, in situations of authority, normal individuals can be made to do evil deeds in the laboratory. Milgram also ran a number of variations of this experiment to find out which factors facilitated compliance and which enabled resistance to authority.
In Part II, Zimbardo discusses how these laboratory results can be extended to real-world phenomena like the Holocaust, Palestinian suicide bombers and suicide cults, and how most of the perpetrators are very common people (the banality of evil).
In Part III, Zimbardo outlines ten lessons from Milgram's experiments, which I find worth summarizing here –
Compliance can be increased by:
A pseudo-legal contract that binds one to the act (which may not be construed as evil a priori, but becomes evil in actual execution). Also, public declarations of commitment create cognitive dissonance and make people stick to their 'contracts'.
Meaningful social roles like 'teacher' given to the perpetrators. They may find solace in the fact that their social role demands the unavoidable evil.
Adherence to and sanctity of rules that were initially agreed upon. The rules may be subtly changed, but an emphasis on rule-based behavior would guarantee better compliance.
Right framing of the issues concerned: instead of 'hurting the participant', framing it as 'improving the learner's learning ability'. Regular readers will note how committed I personally am to framing effects.
Diffusion/abdication of responsibility: either enabling the responsibility for the evil act to be taken on by a senior authority, or having many non-rebelling peers diffuse responsibility, similar to the bystander effect.
Small evil acts initially to reduce the resistance to recruitment. Once into the fold, one may increase the atrocities demanded from the perpetrator.
Gradual increase in the degree of the evil act. Sudden and large jumps in evilness of the acts are bound to be resisted more.
Morphing the Authority from just and reasonable initially to unjust and unreasonable in the later parts.
High exit costs. You cannot beat the system, so better join it! The system can beat you up, so better remain in it! Also, allow dissent or freedom of voice, but suppress freedom of action!
An overarching lie, framework, or 'cover story' that gives a positive spin to the evil acts: 'this experiment would help humanity', 'Jews are bad/inferior and need to be eliminated', etc.
Zimbardo is hopeful that by recognizing these factors that normally aid compliance with unjust and irrational authority, one can have the courage and acumen to resist such authority. The two traits he picks out are taking responsibility for one's own acts and asserting one's own authority.
The word character is normally frowned upon, and rarely used, in psychological discourse nowadays, but like Zimbardo I would like to highlight Erich Fromm's works like Escape from Freedom in this regard, which posit that one can overcome the natural tendency to escape from one's freedom and sense of responsibility, and build a positive character, or habitual behavioral tendencies, that takes full responsibility for the self.
There is another related debate to which I would like to draw attention. Normally it is posited that we are composed of temperaments or personality traits (the most famous being the Big Five or OCEAN traits) and much of our behavior is a result of our inherent tendencies.
A dissenting voice is that of Walter Mischel, who claims that the concept of personality is vague and much of behavior is due to situational factors. I'm sure the truth lies more towards a middle ground: like genes and environment, both personality and situations affect a behavioral outcome. Not stopping here, I also see a role for acquired propensities, habits, or character that can overcome both the underlying propensities and the situational factors. Even after taking character into account, our acts may not be totally non-deterministic, free, or non-predictable, but they could be free in a limited sense: that we, ourselves, incorporated those habits/character traits. We may still behave predictably, but that would not be due to our conditioning or situational factors, but because of an acquired character.
Artificial Neural Networks have historically focussed on modeling the brain as a collection of interconnected neurons. The individual neurons aggregate inputs and either produce an on/off output based on threshold values or produce a more complex output as a linear or sigmoid function of their inputs. The output of one neuron may go to several other neurons.
Not all inputs are equivalent and the inputs to the neuron are weighed according to a weight assigned to that input connection. This mimics the concept of synaptic strength. The weights can be positive (signifying an Excitatory Post-Synaptic Potential ) or negative (signifying an Inhibitory Post-Synaptic Potential).
Learning consists of determining the correct weights that need to be assigned to solve the problem, i.e. to produce a desired output given a particular input. This weight adjustment mimics the increase or decrease of synaptic strengths due to learning, as in long-term potentiation (LTP). Learning may also be modeled by manipulating the threshold required by the neuron for firing, mimicking changes in the neuron's intrinsic excitability.
The model generally consists of an input layer (mimicking sensory inputs to the neurons), a hidden layer (mimicking the association functions of the neurons in the larger part of the brain) and an output layer (mimicking the motor outputs of the neurons).
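As a concrete illustration, the layered model just described can be sketched in a few lines of Python; the weights, layer sizes and input values below are made up purely for the sake of example:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each neuron computes a weighted sum of its inputs (the weights mimic
    # synaptic strengths: positive = excitatory, negative = inhibitory)
    # and passes it through a sigmoid activation.
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Made-up weights: 3 sensory inputs -> 2 hidden units -> 1 motor output.
hidden_w = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
hidden_b = [0.0, -0.1]
output_w = [[1.0, -1.0]]
output_b = [0.2]

sensory = [0.9, 0.1, 0.4]                     # input layer (stimulus)
hidden = layer(sensory, hidden_w, hidden_b)   # hidden/association layer
motor = layer(hidden, output_w, output_b)     # output layer (response)
print(motor)
```

Training would then consist of adjusting `hidden_w` and `output_w` until the desired input-output mapping is produced.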
This model is a very nice replication of the actual neurons and neuronal computation, but it ignores some of the other relevant features of actual neurons:
1. Neuronal inputs are added together through the processes of both spatial and temporal summation. Spatial summation occurs when several weak signals are converted into a single large one, while temporal summation converts a rapid series of weak pulses from one source into one large signal. The concept of temporal summation is generally ignored. The summation consists exclusively of summation of signals from other neurons at the same time and does not normally include the concept of summation across a time interval.
2. Not all neuronal activity is due to external 'inputs'. Many brain regions show spontaneous activity in the absence of any external stimulus. This is not generally factored in. We need a model of the brain that takes into account the spontaneous 'noise' that is present in the brain, how an external 'signal' is perceived against this 'noise', and, moreover, what purpose this 'noise' serves.
3. This model mimics the classical conditioning paradigm, whereby learning is conceptualized in terms of input-output relationships or stimulus-response associations. It fails to throw any light on operant phenomena and activity, where behavior or response is spontaneously generated and learning consists in the increase/decrease/extinction of the timing and frequency of that behavior as a result of a history of reinforcement. This type of learning accounts for the majority of the behavior in which we are most interested: behavior that is goal-directed, and behavior that is time-, context- and state-dependent. The fact that a food stimulus will not always result in a response 'eat', but is mediated by factors like the state (hunger) of the organism, time of day etc., is not explainable by the current models.
4. The concept of time and durations, and how to tune motor output to strict timing requirements, has largely been an unexplored area. While episodic learning and memory may be relatively easy to model in existing ANNs, it's my hunch that endowing them with a procedural memory would be well nigh impossible using existing models.
Over a series of posts, I will try to tackle these problems by enhancing existing neural networks with some new features that are consistent with our existing knowledge about actual neurons.
First, I propose to have a time-threshold in each neural unit. This time-threshold signifies the duration over which temporal summation is applicable and takes place. All input signals received within this time duration, either from repeated firing of the same input neuron or from time-displaced firings of different input neurons, are added together as per the normal input weights, and if at any time this sum rises above the normal threshold-for-firing, the neuron fires. This combines both temporal and spatial summation. With temporal summation, we have an extra parameter: the time duration for which the history of inputs needs to be taken into account.
All neurons will also have a very short-term memory, in the sense that they would be able to remember the strengths of the input signals they have received in the near past, i.e. within the range of the typical time-thresholds set for them. This time-threshold can typically be in milliseconds.
Each time a neuron receives an input, it starts a timer. This timer runs for a very small duration encoded as the time-threshold for that neuron. While this timer is running and has not expired, the input signal is available to the neuron for calculation of total input strength and for deciding whether to fire or not. As soon as the timer expires, the associated input is erased from the neuron's memory and that particular input would no longer be able to affect any future firing of the neuron.
All timers, as well as the memory of associated input signals, are erased after each successful neural firing (every time the neuron generates an action potential). After each firing, the neuron starts afresh and accumulates and aggregates the inputs it receives thereafter within the time-threshold window associated with it.
Of course there could be variations to this. Just as spatial aggregation/firing need not be an either/or decision based on a threshold, the temporal aggregation/firing need not be an either/or decision: one could have linear or sigmoid functions of time that modulate the input signal strength based on the time that has elapsed. One particular candidate mechanism could be a radioactive-decay function that decreases the input signal strength by half after each half-life; here, the half-life is equivalent to the concept of a time-threshold. In the time-threshold case, once a signal arrives it is available to the neuron in its entirety until the time-threshold elapses, and not at all afterwards; in the radioactive-decay case, the input signal is in theory available till infinity, but its strength is diminished by half after each half-life period, making the effects of the input signal negligible after a few half-lives. Of course in the radioactive case too, once the neuron has fired, all memory of that input would be erased and any half-life decay computations stopped.
These are not very far-fetched speculations and modeling the neural networks this way can lead to many interesting results.
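The time-threshold neuron sketched above can be prototyped directly. The class below is my own minimal sketch (the interface and the numeric values are assumptions for illustration), implementing the short-term input memory, the per-input expiry timers, and the erase-on-firing rule:

```python
class TemporalNeuron:
    """A neuron doing combined spatial and temporal summation (a sketch of
    the proposal above; the interface and numbers are assumptions).

    Weighted inputs received within the last `time_threshold_ms` are summed;
    when the running sum reaches `fire_threshold` the neuron fires and its
    short-term input memory is erased."""

    def __init__(self, fire_threshold, time_threshold_ms):
        self.fire_threshold = fire_threshold
        self.time_threshold = time_threshold_ms
        self.memory = []   # list of (arrival_time_ms, weighted_strength)

    def receive(self, now_ms, strength):
        # Expire inputs whose per-input timer has run out.
        self.memory = [(t, s) for t, s in self.memory
                       if now_ms - t < self.time_threshold]
        self.memory.append((now_ms, strength))
        if sum(s for _, s in self.memory) >= self.fire_threshold:
            self.memory = []   # erase all timers/memory on an action potential
            return True        # the neuron fires
        return False

n = TemporalNeuron(fire_threshold=3.0, time_threshold_ms=50)
print(n.receive(0, 1.0))    # False: running sum is 1
print(n.receive(10, 1.0))   # False: running sum is 2
print(n.receive(20, 1.0))   # True: sum reaches 3 within the 50 ms window
print(n.receive(80, 1.0))   # False: memory was erased when the neuron fired
```

The radioactive-decay variant would differ only in `receive`: instead of dropping expired inputs, it would scale each stored strength by `0.5 ** ((now_ms - t) / half_life_ms)` before summing.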
Second, I propose to have some 'clocks' or 'periodic oscillators' in the network that generate spontaneous outputs after a pre-determined time, irrespective of any inputs. Even one such clock is sufficient for our discussion. Such a clock or oscillator system is not difficult to envisage or conceive. We just need a non-random, deterministic delay in the transmission of signals from one neuron to another. There do exist systems in the brain that delay signals, but leaving aside such specialized systems, even normal synaptic transmission along an axon between two neurons would involve some deterministic delay, based on the time it takes the signal to travel down the axon length and assuming that no changes in myelination take place over time, so that the speed of transmission is constant.
In such a scenario, the time it takes for a signal to reach the other neuron is held constant over time. (Note that this time may be different for different neuron pairs, based on both the axon lengths involved and the associated myelination, but it would be the same for the same neuron pair over time.) Suppose that both neurons have very long, unmyelinated axons, that these axons are equal in length, and that the neurons provide inputs to each other. Further suppose that neither neuron has any other inputs, though each may send its output to many other neurons.
Thus, the sole input of the first neuron is the output of the second neuron, and vice versa. Suppose that the thresholds of the two neurons are such that each fires on receiving a single input signal (from the peer neuron). As there is a time lag between the firing of neuron one and its signal reaching the second neuron, the second neuron would fire only after, say, 5 milliseconds (the time it takes the signal to travel) after the first neuron has fired. The first neuron meanwhile responds to the AP generated by the second neuron, which reaches it after the round-trip delay (5 + 5 = 10 ms), and generates an AP 10 ms after its initial firing.
We of course have to assume that somehow the system was first put in motion: something caused the first neuron to fire initially (this could not be other neurons, as we have assumed that this oscillator pair has no external input signals), and after that it is a self-sustaining clock, with neuron 1 and neuron 2 both firing regularly at 10 ms intervals but in opposite phases. We just need GOD to initially fire the first neuron (the spark of life) and thereafter we have periodic spontaneous activity in the system.
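This two-neuron oscillator is easy to simulate. The event-driven sketch below (the 5 ms one-way delay is an assumed value, and the external kick at t = 0 stands in for the 'spark of life') shows each neuron settling into a 10 ms period, the two in opposite phase:

```python
# Two mutually connected neurons with no other inputs; each fires as soon
# as it receives its peer's signal, after a fixed 5 ms axonal delay.
delay_ms = 5
pending = [(0, 1)]         # (arrival time in ms, neuron that fires then)
firings = []
while pending and pending[0][0] < 50:      # simulate the first 50 ms
    t, neuron = pending.pop(0)
    firings.append((t, neuron))
    peer = 2 if neuron == 1 else 1
    pending.append((t + delay_ms, peer))   # signal travels to the peer

print(firings)
# [(0, 1), (5, 2), (10, 1), (15, 2), ...]: each neuron fires every 10 ms,
# the two in opposite phase.
```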
Thirdly, I propose that this 'clock', along with the concept of temporal summation, is able to calculate and code any arbitrary time duration and any arbitrary time-dependent behavior, in particular any periodic or state/goal-based behavior. I've already discussed some of this in my previous posts and will elaborate more in subsequent posts.
For now, some elementary tantalizing facts.
1. Given a 10 ms clock and a neuron capable of temporal summation over a 50 ms duration, we can have a 50 ms clock: the neuron has as its sole input the output of the 10 ms clock. Every 50 ms, it would have accumulated 5 signals in its memory. If the threshold-for-firing of the neuron is set such that it only fires on receiving five times the signal strength output by the 10 ms clock, then this neuron will fire every 50 ms. This neuron generates a periodic output every 50 ms and implements a 50 ms clock.
2. Given a 10 ms clock and a neuron capable of temporal summation over 40 ms (or let's keep the original 50 ms time-threshold neuron, but set its threshold-for-firing to 4 times the output strength of the 10 ms clock neuron), using the same mechanism as defined above, we can have a 40 ms clock.
3. Given a 40 ms clock, a 50 ms clock and a neuron that does not do temporal summation, we can have a 200 ms clock. The sole inputs to the neuron implementing the 200 ms clock are the outputs of the 50 ms and the 40 ms clocks. This neuron does not do temporal summation; its threshold for firing is purely spatial, and it fires only if it simultaneously receives a signal strength equal to or greater than the combined output signal strength of the 50 ms and 40 ms neurons. It is easy to see that, if we assume the 50 ms and 40 ms neurons are firing in phase, the signals from the two neurons arrive at the same time only at the common multiples of their periods, i.e. every 200 ms (the least common multiple of 40 and 50). Voilà, we have a 200 ms clock. After this, I assume, it's clear that the sky is the limit as to the arbitrariness of the duration that we can code for.
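The clock-composition trick can be checked numerically. Note that two in-phase clocks of periods 40 ms and 50 ms first coincide again at their least common multiple, 200 ms:

```python
from math import gcd

def ticks(period_ms, horizon_ms):
    """Firing times of a clock with the given period, all clocks in phase."""
    return set(range(period_ms, horizon_ms + 1, period_ms))

horizon = 1000
c40, c50 = ticks(40, horizon), ticks(50, horizon)

# The purely spatial neuron fires only when the 40 ms and 50 ms signals
# arrive simultaneously, i.e. on the common multiples of the two periods.
coincidences = sorted(c40 & c50)
print(coincidences[:3])   # [200, 400, 600]

# The first coincidence is the least common multiple of the two periods.
assert coincidences[0] == (40 * 50) // gcd(40, 50)   # 200 ms
```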
Lastly, learning consists of changing the temporal thresholds associated with a neuron, so that any arbitrary schedule can be associated with a behavior based on the history of reinforcement. After the training phase, the organism would exhibit spontaneous behavior that follows a schedule, and could learn novel schedules for novel behaviors (transfer of learning).
To me all this seems very groundbreaking theorizing, and I am not aware of how and whether these suggestions/concepts have been incorporated in existing neural networks. Some temporal discussions I could find here. If anyone is aware of such research, do let me know via comments or by dropping a mail; I would be very grateful. I am especially intrigued by this paper (I have access to the abstract only) and the application of temporal summation concepts to hypothalamic reward functions.
Descartes held that non-human animals are automata: their behavior is explicable wholly in terms of physical mechanisms. He explored the idea of a machine which looked and behaved like a human being. Knowing only seventeenth century technology, he thought two things would unmask such a machine: it could not use language creatively rather than producing stereotyped responses, and it could not produce appropriate non-verbal behavior in arbitrarily various situations (Discourse V). For him, therefore, no machine could behave like a human being. (emphasis mine)
To me this seems like a very reasonable and important speculation. Although we have learned a lot about how we are able to generate an infinite variety of creative sentences using Chomsky's generative grammar theory, we still do not have a coherent theory of how and why we are able to produce a variety of behavioral responses in arbitrarily various situations. (I must qualify: we only know how we create new grammatically valid sentences; the study of semantics has not complemented the study of syntax, so we still do not know why we are also able to create meaningful sentences and not just grammatically correct gibberish like "Colorless green ideas sleep furiously". The fact that this grammatically correct sentence is still interpretable, by using polysemy, homonymy or a metaphorical sense for 'colorless', 'green' etc., may provide a clue to how we map meanings, as per Conceptual Metaphor Theory, but that discussion is for another day.)
If we stick to a physical, brain-based, reductionist, no ghost-in-the-machine, evolved-as-opposed-to-created view of human behavior, then it seems reasonable that we start from the premise of humans as an improvement over the animal models of stimulus-response (classical conditioning) or response-reinforcement (operant conditioning) theories of behavior and build upon them to explain how and what mechanism Humans have evolved to provide a behavioral flexibility as varied, creative and generative as the capacity for grammatically correct language generation. The discussions of behavioral coherence, meaningfulness, appropriateness and integrity can be left for another day, but the questions of behavioral flexibility and creativity need to be addressed and resolved now.
I'll start by emphasizing the importance of the response-reinforcement type of mechanism and circuitry. Unfortunately, most of the work I am familiar with regarding the modeling of the human brain/mind/behavior using neural networks focuses on the connectionist model, with the implicit assumption that all response is stimulus-driven and one only needs to train the network and, using feedback, associate a correct response with a stimulus. Thus, we have an input layer for collecting or modeling sensory input, a hidden association layer, and an output layer that can be considered a motor effector system. This dissociation into an input layer representing input acuity and sensitivity; an output layer providing output variability and specificity; and one or more hidden layers that associate input with output maps very well onto our intuitions of a sensory system, a motor system and an association system in the brain generating behavior relevant to external stimuli/situations. However, this is simplistic in the sense that it is based solely on stimulus-response types of associations (classical conditioning) and ignores the other relevant type of association, response-reinforcement. Let me clarify that I am not implying that neural network models are behavioristic: in the form of hidden layers they leave enough room for cognitive phenomena; the contention is that they do not take into account operant conditioning mechanisms. Here it is instructive to note that feedback during training is not equivalent to operant-reinforcement learning: the feedback is necessary to strengthen the stimulus-response associations; it only indicates that a particular response triggered by the particular stimulus was correct.
For operant learning to take place, the behavior has to be spontaneously generated and, based on the history of its reinforcement, its probability of occurrence manipulated. This takes us to an apparently hard problem: how can behavior be spontaneously generated? All our lives we have equated reductionism and physicalism with determinism, so an appeal to spontaneous behavior seems almost like begging for a ghost in the machine. Yet on careful thinking, the problem of spontaneity (behavior in the absence of a stimulus) is not that problematic. One could have a random number generator and code for random responses as triggered by that random number generator. One could claim that introducing randomness in no way gives us 'free will', but that is a different argument. What we are concerned with is spontaneous action, and not necessarily 'free' or 'willed' action.
To keep things simple, consider a periodic oscillator in your neural network. Let us say it takes 12 hours to complete one oscillation (i.e. it is a simple inductor-capacitor pair and it takes 6 hours for the capacitor to discharge and another 6 hours for it to recharge); now we can make connections a priori between this 12-hour clock in the hidden layer and one of the outputs in the output layer, which gets activated whenever the capacitor has fully discharged, i.e. at periodic intervals of 12 hours. Suppose that this output response is labeled 'eat'. Thus we have coded into our neural network a spontaneous mechanism by which it 'eats' at 12-hour intervals.
Till now we haven't really trained our neural net, and moreover we have assumed circuitry like a periodic oscillator from the beginning, so you may object that this is not how our brain works. But let us be reminded that, just as normal neurons in the brain form a model for neurons in the neural network, there is also a suprachiasmatic nucleus that gives rise to circadian rhythms and implements a periodic clock.
As for training, one can assume the existence of just one periodic clock of small granularity, say of 1-second duration, in the system; then, using accumulators that code for how many ticks have elapsed since the last trigger, one can code for any arbitrary periodic response of greater-than-one-second granularity. Moreover, one need not hand-code such accumulators: they would arise automatically out of training, from the other neurons connected to this 'clock' and lying between the clock and the output layer. Suppose that, initially, a one-second clock output is connected (via intervening hidden neuron units) to an output marked 'eat'. Now, we have feedback in this system also. Suppose that, while training, we provide positive feedback only on every 60*60*12th trial (and all its multiples) and provide negative feedback on all other trials; it is not inconceivable that an accumulator neural unit would form in the hidden layer and count the number of ticks that come out of the clock: it would send the trigger to the output layer only on every 60*60*12th trial and suppress the output of the clock on every other trial. Voilà! We now have a 12-hour clock (implemented digitally by counting ticks) inside our neural network, coding for a 12-hour periodic response. We just needed one 'innate' clock mechanism, and using that and the facts of 'operant conditioning' or 'response-reinforcement' pairing we can create an arbitrary number of such clocks in our body/brain. Also, please note that we need just one 12-hour clock, but can flexibly code for many different 12-hour periodic behaviors. Thus, if the 'count' in the accumulator is zero, we 'eat'; if the count is midway between 0 and 60*60*12, we 'sleep'. Thus, though both eating and sleeping follow a 12-hour cycle, they do not occur concurrently, but are separated by a 6-hour gap.
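The trained accumulator amounts to a modulo counter driven by the 1-second base clock; a minimal sketch of the end state (one base clock, two phase-offset 12-hour behaviors) is:

```python
# A sketch of the trained accumulator: a modulo counter driven by the
# 1-second base clock. One base clock flexibly codes two 12-hour behaviors
# ('eat' and 'sleep') in opposite phases, as described above.
PERIOD = 60 * 60 * 12          # 43,200 one-second ticks = 12 hours

count = 0
log = []
for tick in range(1, 3 * PERIOD + 1):   # simulate 36 hours of base-clock ticks
    count = (count + 1) % PERIOD
    if count == 0:                      # a full cycle has elapsed
        log.append(('eat', tick))
    elif count == PERIOD // 2:          # midway through the cycle
        log.append(('sleep', tick))

print(log)
# [('sleep', 21600), ('eat', 43200), ('sleep', 64800), ...]
```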
Suppose further that one reinforcement one is constantly exposed to, and that one uses for training the clock, is 'sunlight'. The circadian clock is reinforced, say, only by the reinforcement provided by exposure to the midday sun, and by no other reinforcements. Then we have a mechanism in place for the external tuning of our internal clocks to a 24-hour circadian rhythm. It is conceivable that, for training other periodic operant actions, one need not depend on external reinforcement or feedback, but may implement an internal reinforcement mechanism. To make my point clear: while the 'eat' action, i.e. a voluntary operant action, may be generated randomly initially and, in the traditional sense of reinforcement, be accompanied by intake of food, which in the classical sense of the word is a 'reinforcement', the intake of food, which is part and parcel of the 'eat' action, should not be treated as the 'feedback' that is required during training of the clock. During the training phase, though the operant may be activated at different times (and by the consequent intake of food be intrinsically reinforced), the feedback should be positive only for operant activations in line with the periodic training, i.e. only on trials on which the operant is produced as per the periodic training requirement; on all other trials negative feedback should be provided. After the training period, not only would the operant 'eat' be associated with the reinforcement 'food': it would also occur as per a certain rhythm and periodicity. The goal of training here is not to associate a stimulus with a response (the usual neural network association learning), but to associate an operant (response) with a schedule (or a concept of 'time').
It's not that revolutionary a concept, I hope: after all, an association of a stimulus (or 'space') with a response per se is meaningless; it is meaningful only in the sense that the response is reinforced in the presence of the stimulus, and the presence of the stimulus provides us a clue to indulge in a behavior that would result in reinforcement. On similar lines, an association of a response with a schedule may seem arbitrary and meaningless; it is meaningful in the sense that the response is reinforced at a scheduled time/event, and the occurrence of the scheduled time/event provides us with a reliable clue to indulge in a behavior that would result in reinforcement.
To clarify by way of an example: 'shouting' may be considered a response that is normally reinforcing, because of, say, its being cathartic in nature. Now, 'shouting' on seeing your spouse's lousy behavior may have had a history of reinforcement, and you may have a strong association between seeing 'spouse's lousy behavior' and 'shouting'. You thus have a stimulus-response pair. Why you don't shout always, or when, say, the stimulus is your boss's lousy behavior, is because in those stimulus conditions the response 'shouting', though still cathartic, may have severe negative costs associated with it, and hence in those situations it is not really reinforced. Hence the need for an association between 'spouse's lousy behavior' and 'shouting': only in the presence of the specific stimulus is shouting reinforcing, and not in all cases.
Take another example, that of ‘eating’, which again can be considered a normally rewarding and reinforcing response, as it provides us with nutrition. Now, ‘eating’ two or three times a day may be rewarding; but eating all the time, or only at a 108-hour periodicity, may not be that reinforcing a response, because such a schedule does not take care of our body’s requirements. While eating at a 108-hour periodicity would impose severe costs on us in terms of undernutrition and survival, eating at a 2-minute periodicity too would not be that reinforcing. Thus, the idea of training spontaneous behaviors as per a schedule is not that problematic.
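The training regime sketched above can be put in a few lines of code. This is a hedged illustration, not a claim about actual neural mechanism: the 24-hour period, the mid-cycle target phase, and the simple additive learning rule are all my own assumptions for the sketch.

```python
import random

# Hypothetical sketch of the training regime described above: an
# operant ("eat") fires at random times; the trainer's feedback is
# positive only when the emission lands on the target phase of the
# training period, and negative otherwise.

PERIOD = 24        # training period, in hours
TARGET_PHASE = 12  # reinforce only emissions at mid-cycle ("mid-noon sun")

def feedback(t):
    """External trainer: +1 if the operant fired on schedule, -1 otherwise."""
    return 1 if t % PERIOD == TARGET_PHASE else -1

def train(n_trials=10000, lr=0.01, seed=0):
    rng = random.Random(seed)
    strength = [0.0] * PERIOD  # propensity to emit the operant at each hour
    for _ in range(n_trials):
        t = rng.randrange(PERIOD)        # operant emitted at a random hour
        strength[t] += lr * feedback(t)  # schedule feedback, not the food itself
    return strength

strength = train()
# After training, the propensity peaks at the reinforced phase: the
# operant has been associated with a schedule, not with a stimulus.
assert max(range(PERIOD), key=lambda h: strength[h]) == TARGET_PHASE
```

Note that the food itself never appears in the update: only the schedule feedback does, which is the whole point of separating the intrinsic reinforcement from the training feedback.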
Having taken a long diversion, arguing for a case for ‘operant conditioning’ based training of neural networks, let me come to my main point.
While the ‘stimulus’ and the input layer represent the external ‘situation’ that the organism faces, the network comprising the clocks and accumulators represents the internal state and ‘needs’ of the organism. One may even claim, a bit boldly, that they represent the goals or motivations of the organism.
An ‘eat’ clock that is about to trigger an ‘eat’ response may represent a need to eat. This clock need not be a digital clock that triggers an ‘eating’ act only when the 12-hour cycle is completed to the dot. Rather, this would be a probabilistic, analog clock, with the ‘probability’ of the eating response growing higher as the 12-hour cycle nears its end, and the clock being reset whenever the eating response happens. If the clock is in the early phases of the cycle (just after an eating response), the need to eat (hunger) is low; when the clock is in the last phases of the cycle, the hunger need is strong and would make the ‘eating’ action more and more probable.
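A minimal sketch of such a probabilistic, analog clock, assuming a simple linear ramp for the drive (the class name, the 12-hour period, and the ramp function are all illustrative choices of mine):

```python
import random

class NeedClock:
    """Probabilistic, analog 'eat' clock: the drive ramps up over the
    cycle and the clock resets whenever eating actually occurs."""

    def __init__(self, period_hours=12.0):
        self.period = period_hours
        self.phase = 0.0  # hours elapsed since the last reset (last meal)

    def tick(self, hours=1.0):
        self.phase = min(self.phase + hours, self.period)

    def drive(self):
        """Hunger drive in [0, 1]: low just after eating, high near cycle end."""
        return self.phase / self.period

    def maybe_eat(self, rng):
        """Emit 'eat' with probability equal to the current drive; reset on success."""
        if rng.random() < self.drive():
            self.phase = 0.0
            return True
        return False

clock = NeedClock()
rng = random.Random(1)
for hour in range(1, 25):
    clock.tick()
    if clock.maybe_eat(rng):
        print(f"ate at hour {hour}; clock reset")
```

Because the emission is probabilistic, eating tends to cluster near the end of each cycle without ever being locked to an exact tick.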
Again, this response-reinforcement system need not be isolated from the stimulus-response system. Say one sees the stimulus ‘food’, and the hunger clock is still showing ‘medium hungry’. The partial activation of the ‘eat’ action as a result of seeing the stimulus ‘food’ (other actions, like ‘throw the food’ or ‘ignore the food’, may also be activated) may win over other competing responses to the stimulus, as the hunger clock is still contributing a medium probability of activation, and hence one may end up eating. This, however, may reset the hunger clock, and now a second ‘food’ stimulus may not be able to trigger the ‘eat’ response, as the activation of ‘eat’ due to the hunger clock is minimal and other competing actions may win over ‘eat’.
To illustrate the interaction between stimulus-response and response-reinforcement in another way: on seeing the written word ‘hunger’ as a stimulus, one consequence of that stimulus could be to manipulate the internal hunger clock so that the need for food is increased. This would be a simple operation of increasing the clock count, or making the hunger need stronger, and thus increasing the probability of occurrence of the ‘eat’ action.
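The two interactions just described, the clock biasing response competition and a stimulus bumping the clock top-down, can be sketched as follows; every activation weight here is invented purely for illustration.

```python
# Stimulus 'food' partially activates several competing responses
# (weights are illustrative assumptions, not measured quantities).
STIMULUS_ACTIVATION = {"eat": 0.4, "ignore": 0.5, "throw": 0.2}

def choose_response(hunger_drive):
    """Combine stimulus-driven activation with the internal hunger bias."""
    scores = dict(STIMULUS_ACTIVATION)
    scores["eat"] += hunger_drive  # the hunger clock biases 'eat'
    return max(scores, key=scores.get)

# Medium hunger: the clock's contribution lets 'eat' win over 'ignore'.
assert choose_response(hunger_drive=0.5) == "eat"
# Just after eating (clock reset), 'ignore' wins over 'eat'.
assert choose_response(hunger_drive=0.0) == "ignore"

def read_word(word, drive, bump=0.3):
    """Seeing the written word 'hunger' raises the clock's count top-down."""
    return min(drive + bump, 1.0) if word == "hunger" else drive

# The word alone flips the outcome for a sated organism.
assert choose_response(read_word("hunger", 0.0)) == "eat"
```

The same competition mechanism covers both cases: the clock contributes a bias from below, and a symbolic stimulus can manipulate that same bias from above.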
I would also like to take a leap here and equate ‘needs’ with goals and motivations. Thus, some of the most motivating factors for humans, like food, sex and sleep, can be explained in terms of underlying needs or drives, which seem to be periodic in nature. It is also interesting to note that many of them do have cycles associated with them (we have sleep cycles and eating cycles), that many times these cycles are linked with each other or with the circadian rhythm, and that if the clock goes haywire it has multiple linked effects across the whole spectrum of motivational ‘needs’. In a manic phase one would have a low need to sleep, eat etc., while the opposite may be true in depression.
That brings me finally to Marvin Minsky and his AI attempts to code for human behavioral complexity.
In his analysis of the levels of mental activity, he starts with the traditional if-then rule and then refines it to include both situations and goals in the if part. To me this seems intuitively appealing: one needs to take into account not only the external ‘situation’, but also the internal ‘goals’, and then come up with a set of possible actions, and maybe a single action, that is an outcome of the combined ‘situation’ and ‘goals’ input.
However, Minsky does not think that simple if-then rules, even when they take ‘goals’ into consideration, would suffice, so he posits if-then-result rules. To me it is not clear how introducing a result clause makes any difference: both goals and stimulus may lead to multiple if-then rule matches and multiple action activations. These action activations are nothing but what Minsky has clubbed into the result clause, and we still have the hard problem of choosing one of a set of matching clauses over the others.
Minsky has evidently thought about this and says:
What happens when your situation matches the Ifs of several different rules? Then you’ll need some way to choose among them. One policy might arrange those rules in some order of priority. Another way would be to use the rule that has worked for you most recently. Yet another way would be to choose rules probabilistically.
To me this seems not a problem of choosing which rule to use, but of choosing which response to choose, given several possible responses resulting from the application of several rules to this situation/goal combination. It is tempting to assume that the ‘needs’ or ‘goals’ would be able to uniquely determine the response given ambiguous or competing responses to a stimulus; yet I can imagine scenarios where the needs of the body do not provide a reliable clue, and one may need the algorithms/heuristics suggested by Minsky to resolve the conflict. Thus I see the utility of if-then-result rules: we need a representation not only of the if part of the rule (goals/stimulus), which tells us the set of possible actions that can be triggered by this stimulus/situation/needs combination, but also of the result part of the rule, which tells us what reinforcement values these responses (actions) have for us, so that we can use this value-response association to resolve the conflict and choose one response over another. This response-value association looks very much like the operant-reinforcement association, so I am tempted once more to believe that the value one ascribes to a response changes with bodily needs and is in fact reflective of bodily needs; but I’ll leave that assumption for now and instead assume that somehow we do have different priorities assigned to the responses (and not to rules, as Minsky had originally proposed) and make the selection on the basis of those priorities.
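The reading proposed above — the if-then rules generate the candidate response set, while a separate, response-level value table resolves the conflict — might be sketched like this; the rule contents and value numbers are illustrative assumptions, not anything from Minsky.

```python
RULES = [
    # (situation, goal) -> response: the "if -> then" part of the rule.
    (("food_present", "hungry"), "eat"),
    (("food_present", "hungry"), "hoard"),
    (("food_present", "sated"), "ignore"),
]

# The "result" part: the current reinforcement value of each response,
# assumed here to shift with bodily needs rather than being fixed.
RESPONSE_VALUE = {"eat": 0.8, "hoard": 0.3, "ignore": 0.1}

def respond(situation, goal):
    # Step 1: every rule whose if-part matches contributes a candidate.
    candidates = [r for (cond, r) in RULES if cond == (situation, goal)]
    if not candidates:
        return None
    # Step 2: the response-value association, not a rule priority,
    # picks among the competing candidates.
    return max(candidates, key=RESPONSE_VALUE.get)

assert respond("food_present", "hungry") == "eat"   # beats 'hoard' on value
assert respond("food_present", "sated") == "ignore"
```

The priorities sit on the responses, not on the rules, which is exactly the amendment to Minsky's proposal argued for above.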
Though I have posited a single priority-based probabilistic selection of response, it is possible that a variety of selection mechanisms and algorithms are used and are activated selectively based on the problem at hand.
This brings me to the critic-selector model of mind by Minsky. As per this model, one needs both critical thinking and problem-solving abilities to act adaptively. One need not just be good at solving problems; one also has to understand and frame the right problem and then use the problem-solving approach best suited to it.
Thus, the first task is to recognize the problem type correctly. After recognizing a problem correctly, we may apply different selectors, or problem-solving strategies, to different problems.
He also posits that most of our problem solving is analogical and not logical. Thus, recognizing a problem is more like recognizing a past analogical problem; and selecting is then applying the methods that worked in that case to this problem.
How does that relate to our discussions of behavioral flexibility? I believe that every time we are presented with a stimulus, or have to decide how to behave in response to it, we are faced with a problem: that of choosing one response over all others. We need to activate a selection mechanism, and that selection mechanism may differ based on the critics we have used to define the problem. If the selection mechanism were fixed and hard-wired, we would not have behavioral flexibility. Because the selection mechanism may differ based on our framing of the problem in terms of the appropriate critics, our behavioral response may be varied and flexible. At times we may use the selector that takes into account only the priorities of different responses in terms of the needs of the body; at other times the selector may be guided by different selection mechanisms that involve emotions and values as the driving factors.
Minsky has also built a hierarchy of critic-selector associations, and I will discuss them in the context of developmental unfolding in a subsequent post. For now, it is sufficient to note that different types of selection mechanisms would be required to narrow the response set under different critical appraisals of the initial problem. To recap: a stimulus may trigger different responses simultaneously, and a selection mechanism would be involved that selects the appropriate response, based on the values associated with the responses and on the selection algorithm activated by our appraisal of the reason for the conflicting and competing responses. While critics help us formulate the reason for multiple responses to the same stimulus, the selector helps us apply different selection strategies to the response set, based on what selection strategy had worked on an earlier problem that involved analogous critics.
One can further dissociate this into two processes. One is grammar-based and syntactical, and uses rules for generating a valid behavioral action based on the critic and selector predicates, and on the particular response sets and strategies that make up the critic and selector clauses respectively. By combining and recombining the different critics and selectors, one can make an infinite number of rules for how to respond to a given situation, and each such rule application may potentially lead to a different action. The other process is that of semantics: how the critics are mapped onto response sets, and how selectors are mapped onto different value preferences.
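The critic-selector arrangement discussed above can be sketched minimally: a critic frames why several responses compete, and the selector associated with it applies a different narrowing strategy. The critics, selectors, and response sets below are hypothetical placeholders of my own.

```python
def need_based_selector(responses, context):
    """Narrow by bodily-need priority: the highest-valued need wins."""
    return max(responses, key=lambda r: context["need_value"][r])

def recency_selector(responses, context):
    """Narrow by what worked most recently (Minsky's recency policy)."""
    for r in reversed(context["history"]):
        if r in responses:
            return r
    return responses[0]

# Critic -> selector associations: the analogical "what worked on a
# problem framed this way before".
CRITICS = {
    "conflicting_needs": need_based_selector,
    "novel_situation": recency_selector,
}

def act(critic, responses, context):
    """Frame the problem via a critic, then apply its selector."""
    return CRITICS[critic](responses, context)

ctx = {"need_value": {"eat": 0.9, "flee": 0.2},
       "history": ["flee", "eat", "flee"]}
assert act("conflicting_needs", ["eat", "flee"], ctx) == "eat"
assert act("novel_situation", ["eat", "flee"], ctx) == "flee"
```

The same response set yields different choices under different critics, which is the behavioral flexibility the text argues a fixed, hard-wired selector could not provide.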
Returning to response selection: given a stimulus, there are clearly two processes at work. One uses the stored if-then rules (the stimulus-response associations) to make available to us the set of all actions that are a valid response to the situation; the other uses the then-result rules (and the response-value associations, which I believe are dynamic in nature and keep changing) to choose one response from that set, as per the ‘subjective’ value preferred at the moment. This may be the foundation for the ‘memory’ and ‘attention’ dissociations in the working-memory abilities used in the Stroop task, and it is tempting to think that while the DLPFC and the executive centers determine the set of all possible actions (utilizing memory) given a particular situation, the ACC selects among the competing responses based on the values associated with them, by selectively directing attention to the selected response/stimulus/rule.
Also, it seems evident that one way to increase adaptive responses would be to become proficient in discriminating stimuli and perceiving the subjective world accurately; the other way would be to become more and more proficient in directing attention to a particular stimulus/ response over others and directing attention to our internal representations of them so that we can discriminate between the different responses that are available and choose between them based on an accurate assessment of our current needs/ goals.
Using his ideas of sensorimotor function, Hughlings-Jackson described two “halves” of consciousness, a subject half (representations of sensory function) and an object half (representations of motor function). To describe subject consciousness, he used the example of sensory representations when visualizing an object. The object is initially perceived at all sensory levels. This produced a sensory representation of the object at all sensory levels. The next day, one can think of the object and have a mental idea of it, without actually seeing the object. This mental representation is the sensory or subject consciousness for the object, based on the stored sensory information of the initial perception of it.
What enables one to think of the object? This is the other half of consciousness, the motor side of consciousness, which Hughlings-Jackson termed “object consciousness.” Object consciousness is the faculty of “calling up” mental images into consciousness, the mental ability to direct attention to aspects of subject consciousness. Hughlings-Jackson related subject and object consciousness as follows:
The substrata of consciousness are double, as we might infer from the physical duality and separateness of the highest nervous centres. The more correct expression is that there are two extremes. At the one extreme the substrata serve in subject consciousness. But it is convenient to use the word “double.”
Hughlings-Jackson saw the two halves of consciousness as constantly interacting with each other, the subjective half providing a store of mental representations of information that the objective half used to interact with the environment.
The term “subjective” answers to what is physically the effect of the environment on the organism; the term “objective” to what is physically the reacting of the organism on the environment.
Hughlings-Jackson’s concept of subjective consciousness is akin to the if-then representation of mental rules. One needs to perceive the stimuli as clearly as possible and to represent them along with their associated actions so that an appropriate response set can be activated to respond to the environment. His object consciousness is the attentional mechanism that is needed to narrow down the options and focus on those mental representations and responses that are to be selected and used for interacting with the environment.
As per him, subject and object consciousness arise from a need to represent sensations (stimuli) and movements (responses) respectively, and this need is apparent if our stimulus-response and response-reinforcement mappings are to be taken into account in determining appropriate action.
All nervous centres represent or re-represent impressions and movements. The highest centres are those which form the anatomical substrata of consciousness, and they differ from the lower centres in compound degree only. They represent over again, but in more numerous combinations, in greater complexity, specialty, and multiplicity of associations, the very same impressions and movements which the lower, and through them the lowest, centres represent.
He had postulated that temporal lobe epilepsy involves a loss of objective consciousness (leading to automatic movements, as opposed to voluntary movements that follow a schedule and do not happen continuously) and an increase in subjective consciousness (leading to feelings like déjà vu, or an over-consciousness in which every stimulus seems familiar and triggers the same response set and nothing seems novel: the dreamy state). These he described as the positive and negative symptoms, or deficits, associated with an epileptic episode. It is interesting to note that one of the positive symptoms he describes of epilepsy, associated with subjective consciousness of the third degree, is ‘Mania’: the same label that Minsky uses for a Critic at his sixth, self-conscious level of thinking. The critic Minsky lists is:
Self-Conscious Critics. Some assessments may even affect one’s current image of oneself, and this can affect one’s overall state:
None of my goals seem valuable. (Depression.)
I’m losing track of what I am doing. (Confusion.)
I can achieve any goal I like! (Mania.)
I could lose my job if I fail at this. (Anxiety.)
Would my friends approve of this? (Insecurity.)
It is interesting to note that this Critic, or subjective appraisal of the problem in terms of Mania, can lead to a subjective consciousness that is characterized as Mania.
If Hughlings-Jackson studied epilepsy correctly and made some valid inferences, then this may tell us a lot about how we respond flexibly to novel and familiar situations, and about how the internal complexity required to ensure flexible behavior leads to representational needs in the brain, which might in turn lead to the necessity of consciousness.
Chomsky, in a classic paper, discusses Skinner’s book Verbal Behavior and the associated attempts of behaviorists to explain language acquisition as just another complex behavior, learned entirely through the behaviorist mechanisms of classical and operant conditioning.
Chomsky himself clarifies the difference between cognitive and behaviorist explanations as follows:
It is important to see clearly just what it is in Skinner’s program and claims that makes them appear so bold and remarkable, It is not primarily the fact that he has set functional analysis as his problem, or that he limits himself to study of observables, i.e., input-output relations. What is so surprising is the particular limitations he has imposed on the way in which the observables of behavior are to be studied, and, above all, the particularly simple nature of the function which, he claims, describes the causation of behavior. One would naturally expect that prediction of the behavior of a complex organism (or machine) would require, in addition to information about external stimulation, knowledge of the internal structure of the organism, the ways in which it processes input information and organizes its own behavior. These characteristics of the organism are in general a complicated product of inborn structure, the genetically determined course of maturation, and past experience. …… The differences that arise between those who affirm and those who deny the importance of the specific “contribution of the organism” to learning and performance concern the particular character and complexity of this function, and the kinds of observations and research necessary for arriving at a precise specification of it. If the contribution of the organism is complex, the only hope of predicting behavior even in a gross way will be through a very indirect program of research that begins by studying the detailed character of the behavior itself and the particular capacities of the organism involved.
It would be prudent for me to clarify at the outset that I am a cognitivist, and I definitely see the merits of Chomsky’s arguments and the inadequacy of the potentially misguided attempts of Skinner and other behaviorists to apply behaviorist concepts and results derived from animal studies to the study of semantics: of how words get associated with particular meanings and are used in particular contexts. On the behaviorist account, this happens either through prior association with a stimulus (stimulus control, something like classical conditioning, in which the word ‘red’ gets associated with the property of redness of an object, and the internal visual response, or quale, of redness that is produced automatically in response to the stimulus causes a conditioned association between ‘red’ and that quale), or because the word or sentence was variably reinforced through mechanisms like self-reinforcement, reinforcement-by-way-of-praise, etc.
I definitely do not concur with Skinner’s arguments and definitions, and Chomsky shows, to some extent, an understanding of the behaviorist concepts (especially in section II); but he also at times shows a profound lack of appreciation of the finer subtleties of behaviorist concepts. For example:
In the book under review, response strength is defined as “probability of emission” (22). This definition provides a comforting impression of objectivity, which, however, is quickly dispelled when we look into the matter more closely. The term probability has some rather obscure meaning for Skinner in this book.9 We are told, on the one hand, that “our evidence for the contribution of each variable [to response strength] is based on observation of frequencies alone” (28). At the same time, it appears that frequency is a very misleading measure of strength, since, for example, the frequency of a response may be “primarily attributable to the frequency of occurrence of controlling variables” (27). It is not clear how the frequency of a response can be attributable to anything BUT the frequency of occurrence of its controlling variables if we accept Skinner’s view that the behavior occurring in a given situation is “fully determined” by the relevant controlling variables.
Here Chomsky has mixed up, and made a mess of, two separate concepts and processes in behaviorism: classical and operant conditioning. In the above paragraph, the definition of response strength in terms of ‘probability of emission’ belongs to operant conditioning, wherein responses are autonomously generated by an organism irrespective of any stimulus present (leaving aside the case of a discriminative stimulus for now), e.g. a bar-press; based on the reinforcing stimulus presented to the organism after the response, the response strength, or probability that the response will occur autonomously in future, increases. This is mixed up with the earlier concept of stimulus control (or classical conditioning), wherein controlling variables (or conditioned stimuli) relevant to a situation lead to an utterance or verbal behavior. This determination of verbal behavior by the presence of a conditioned stimulus (reflexive language) is a different mechanism from that used in deliberative language, wherein an utterance is produced voluntarily, even in defiance of its surrounding stimuli, but the probability of that occurrence is in proportion to its history of reinforcement. By mixing the two concepts, Chomsky just manages to show his ignorance and lack of appreciation of behaviorist concepts and mechanisms.
But my gripe with Chomsky is more about the change in focus he has managed to pull off, with the study of semantics taking a backseat to the study of grammar, or syntax. In my limited comprehension, I am unable to appreciate how concepts of Universal Grammar, however relevant and innate, could substitute for a proper analysis of language acquisition in terms of an ability to master not only the grammar but also the semantics. Grammar, or grammar acquisition per se, does not tell us much about the most relevant aspects of language acquisition: semantics and pragmatics.
Addressing semantics would be a task for a later day (and perhaps for a more capable person than me), but today I would like to tentatively propose a role for the behaviorist concepts of reinforcement, or operant conditioning, as relevant to the general ability to understand and produce language, and also to the general difference in talkativeness (and listening-ness, if such a concept exists) between different people.
Language acquisition can be broken into two components: a language-understanding (or hearing) component and a language-production (or speaking) component. It is a fact that the language-understanding component develops prior to language production. Also, it should be kept in mind that language is essentially a two-person activity, with the utterance of one acting as a (reinforcing) stimulus for the other, and the utterance of the other acting as a response.
The Hearing (or language understanding) activity:
This language component is used for understanding the meaning of utterances (say spoken language as opposed to written or depicted using sign language) and is relatively independent of language production.
The response is parsing the spoken sentence into words and, by analyzing the syntax and meaning of the words, constructing a mental image of the intention, beliefs, knowledge and possible behavior of the person who spoke the sentence, and integrating that knowledge with one’s representation of, and expectations about, the world in general.
The reinforcing stimulus is observing that the behavior of the person who spoke the sentence is in accordance with that earlier constructed expectation and prediction (the hearing response). It is assumed that an external act (whether negative or positive) that is in accord with an internal expectation would be rewarding, in the sense that it would satisfy and reduce the internal drive to know in general, and to know the future in particular. Alternately, it can be posited that the state of not knowing clearly about the future is a state of unbearable tension, and that the uncertainty associated with the world is a negative stimulus (property) associated with it. By hearing and understanding a sentence uttered by someone else, some of this aversive stimulus (uncertainty) is removed, and thus by negative reinforcement (removal of an aversive stimulus) any act of hearing (or understanding, or refining one’s predictions regarding the world) is inherently rewarding, irrespective of whether the actual outcome is as per the constructed expectations. The positive reinforcement of having the expectation met would then further strengthen the hearing response. This is a general strengthening of the hearing response (the response of creating expectations from heard utterances) and is independent of the actual content of the expectation. Thus, if an effort to construe meaning from an utterance is followed by the positive reinforcement of having that meaning verified, then the propensity to construe meaning from utterances increases in strength. It is posited that this behaviorist mechanism is one of the strong motivating factors that encourage a child to understand the language of its parents and society.
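The two reinforcement routes claimed above — uncertainty reduction paid on every comprehension attempt (negative reinforcement), and a bonus when the prediction is later verified (positive reinforcement) — can be put in a toy update rule. All constants are illustrative assumptions.

```python
# Toy sketch of the hearing-reinforcement claim: comprehension is
# reinforced both by uncertainty reduction (paid on every attempt)
# and by verified predictions (paid only sometimes).

UNCERTAINTY_RELIEF = 0.2  # removing some aversive uncertainty: always paid
PREDICTION_BONUS = 0.5    # paid only when the expectation is later verified

def update_hearing_strength(strength, prediction_verified, lr=0.1):
    """One trial of comprehension: return the updated habit strength."""
    reward = UNCERTAINTY_RELIEF + (PREDICTION_BONUS if prediction_verified else 0.0)
    return strength + lr * reward

s = 0.0
for verified in [True, False, True, True]:
    s = update_hearing_strength(s, verified)

# Strength grows even on the unverified trial (negative reinforcement
# alone), and grows faster when predictions are verified.
assert s > 0
assert update_hearing_strength(0.0, True) > update_hearing_strength(0.0, False)
```

The key property of the sketch is that the habit of construing meaning strengthens on every attempt, which matches the claim that hearing is inherently rewarding irrespective of outcome.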
Although as adults parsing sentences into words and extracting meaning from them seems automatic to us, for a child extracting meaning from a string of syllables is a very effortful activity, and the fact that doing so leads to positive reinforcement would encourage the child to pay attention to the hearing and understanding activity and increase its habit strength. The alternative to such a behavioristically mediated account of hearing acquisition would be to claim that the development of language understanding is under genetic control and is similar to imprinting or genetic unfolding. This claim is weakened by the ability of mature adults to learn a foreign language: if the mechanism were imprinting alone, acquisition should be possible only within a critical period of childhood and not in adulthood. The fact that children learn second languages faster and better than adults, together with some evidence from the study of feral children for a critical period in first-language acquisition, points to a mixed role of genetic factors like imprinting and behaviorist factors like reinforcement of the ‘predicting the world’ capability.

The Speaking (or language production) activity:
This language component is used for production of meaningful utterances (say spoken language as opposed to written or depicted using sign language) and follows the relevant stage of language comprehension.
The response, in this case, would be constructing a valid, informative sentence by piecing together words that denote the shared meanings of objects and situations, and uttering a valid, meaningful sentence directed towards a listener. The intention behind the utterance could be pedagogic (informing or teaching a fact to someone you care about), instrumental (using the person spoken to as a tool to achieve a desired outcome), empathetic (sharing thoughts, feelings etc. with the other person) or of some other kind.
The reinforcing stimulus, in this case, would be observing the behavior of the spoken-to person and discovering that the relevant information or facts have been conveyed and understood properly. This reinforcing stimulus can take the form of observing actual behavior in line with the intended meaning of the utterance, or be as subtle as deciphering the facial expressions of the listener for signs of understanding. In extended verbal conversations, a verbal utterance by the listener may serve as the reinforcing stimulus and substitute for the outward behavior or expression of understanding (this, for example, is relevant in telephonic conversations, and is one of the reasons children learn to speak on telephones later than they learn to talk to adults face-to-face). The stimulus is reinforcing because it satisfies an earlier drive to control (using the other person as a tool for one’s own ends), the drive to share (the drive for belonging and intimacy) or the drive to inform (the pedagogic drive).
Speaking, or constructing valid sentences by stringing syllables together, is again an effortful activity, and though as an adult it may seem effortless, strong motivations have to be present in childhood for the development of proper language-production capabilities. The reinforcing stimulus of having one’s intentions met, by observing the behavior of the listener, provides the required incentive and mechanism whereby the habit strength of generating meaningful utterances is strengthened.
How to test for this theory:
It is clear from the above discussion that hearing, or language understanding, predominantly relies on the drive for meaning, or for predicting the world, as its guiding mechanism, whereas speaking, or language production, relies on other mechanisms involving the drives for control, empathy and instruction.
Specifically, if some subjects are primed with thoughts of death (as opposed to a neutral control topic), they may exhibit a stronger drive for subsequent activities that give rise to a sense of meaning. This manipulation could take the form of thinking of the September 11 attacks, which increases mortality salience, or of asking the participants to read the following instructions designed to increase their mortality salience:
Please briefly describe the emotions that the thought of your own death arouses in you.
Jot down, as specifically as you can, what you think will happen to you physically as you die and once you are physically dead.
The other half of the participants should be made to respond to similar instructions, but in reference to an upcoming exam rather than death.
Afterwards, both groups should be allowed an activity that involves language understanding (say listening to a meaningful audio radio program or conversation) and one that does not (say painting or sketching a drawing). The respondents should then be asked which activities (language-comprehension-related or visual-painting-related) they found more satisfying or meaningful. If those with high mortality salience also reported markedly more satisfaction from the language-comprehension activities, as compared with the control group and the control task, then this would be a strong indicator of the importance of meaning formation in the motivation for language comprehension. A particular confound here is the second task: as per the Mixing Memory post, art may also serve as a meaning generator, and hence may not be a suitable control task; it should be replaced by a meaningless task, like a repetitive manual-action task, and it should be ensured that this task does not involve meaning generation. One control that does seem appropriate is language production, as the mechanism underlying it is posited to be different from meaning acquisition. Thus, the control activity could be related to language production (say allowing the participants to make an extempore speech on a topic for 20 minutes).
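If anyone did run the study, the predicted group difference could be checked with a simple two-proportion test on how many participants in each condition preferred the comprehension activity. The counts below are invented purely to illustrate the comparison, not results of any kind.

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion under the null hypothesis
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Invented counts: 28/40 in the mortality-salience group preferred the
# comprehension activity, versus 16/40 in the exam (control) group.
z = two_proportion_z(28, 40, 16, 40)

# |z| > 1.96 would support the predicted difference at alpha = .05.
print(round(z, 2))
```

With these illustrative counts the statistic clears the conventional threshold, but of course real data could go either way.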
Finally, I would like to highlight a real-life experiment. Those who participate in a ten-day Vipassana meditation camp are not allowed to speak for those ten days. The amount they hear is also limited to some morning and evening hymns (which may involve more music than language); apart from that, no other hearing or language understanding takes place. After the ten-day speaking fast, when the participants talk to each other, they find great meaning in the conversations. This may be a case of reduction of the meaning drive after its prolonged starvation.
Also, traits like loquaciousness may be explained partially in terms of the different underlying needs for control, empathy, instruction etc. that give rise to talking behavior, as well as the particular history of reinforcement the subject has undergone, thus making the trait subject to both genetic and environmental influences.
To end on a lighter note, please note Mixing Memory’s evaluation of such studies linking TMT and Art.
I’ve never really hung out in a social psychology laboratory, but here is how I picture a typical day in one. There are some social psychologists sitting around, drinking some sort of exotic tea, and free associating. One psychologist will say the name of a random social psychological theory, and another will then throw out the first thing that comes into his or her head. They’ll write each of these down, and the associations will then become the basis for their next several research projects. OK, so that’s probably not really what’s going on, and I suppose there’s a more scientific method to the social psychologist’s madness, but occasionally I come across a study that makes me wonder. And the great thing about having a blog is that I get to write about it when I do. Today’s example: terror management theory and modern art
I am, at present, camping in the field of social psychology, and thus take the privilege of suggesting a more bizarre study that could possibly prove what we may all intuitively know: that the motivation for hearing something is that we derive meaning from it! (Remember the cocktail party effect, wherein we are able to selectively listen to the conversation of interest, or the one most meaningful to us.) As they say, no research is so abstruse as to not get funded. So, all you students out there, anyone care to conduct such a study (and prove me right)?