Category Archives: language

Darwinian Linguistic Evolution

There is a paper by Oudeyer and Kaplan that discusses the evolution of languages in Darwinian terms. This is a refreshingly new (to me!) take on how languages may evolve. It applies the Darwinian principles of heritability, variation and selection to individual phonetic words as well as to associations between words and meanings.

The article uses computer simulations to inform its theory. Some of the take-home points are:

  • For linguistic coherence to evolve (that is, for one word to refer to the same meaning across different agents), the best-suited replication principle is one whereby the most frequently encountered word is repeated and thus gets fixated in the population. This scores over the use-the-last-heard-phoneme rule and the use-the-phoneme-as-per-frequency-of-usage rule.
  • Phonemes that can easily be confused with nearby phonemes (and so are liable to mutate more) get selected against; selection thus performs an implicit evaluation whereby phonemes that do not mutate (or mutate less) are preferred and get fixated.
  • In a population with agents entering and leaving, the population flux ensures that optimal words are used and sub-optimal ones are done away with.
  • The linguistic phonemes (or words) that are used to represent concepts partition the acoustic space in such a way that there is least scope for confusion amongst the phonemes.
  • A trade-off arises between linguistic distinctiveness and robustness. Some words are long enough that they can mutate more, but are not easily confusable. Other, frequently used words are short and do not mutate easily, but if they do mutate, more confusion of meaning arises.

There are more such interesting information nuggets in the paper, so why not have a look at the original paper itself?

Hat tip: Babel’s Dawn.

Stories we tell ourselves

NYT has a pretty good article on narratives, or the life stories that we tell ourselves to make some sense of our lives. Each of us conjures the disparate experiences we have had into a coherent narrative or life story of who we are, how we became this way, and where we are headed.

Narratives are an often ignored aspect of psychology, and not much research has been done on them, though they are very essential for us: they give us a framework within which we reconstruct our memories or think about the future.

The article mentions the work of Dr. McAdams with narratives and how having certain types of narratives affects future outcomes. I'll just quote from the article:

In analyzing the texts, the researchers found strong correlations between the content of people’s current lives and the stories they tell. Those with mood problems have many good memories, but these scenes are usually tainted by some dark detail. The pride of college graduation is spoiled when a friend makes a cutting remark. The wedding party was wonderful until the best man collapsed from drink. A note of disappointment seems to close each narrative phrase.

By contrast, so-called generative adults — those who score highly on tests measuring civic-mindedness, and who are likely to be energetic and involved — tend to see many of the events in their life in the reverse order, as linked by themes of redemption. They flunked sixth grade but met a wonderful counselor and made honor roll in seventh. They were laid low by divorce, only to meet a wonderful new partner. Often, too, they say they felt singled out from very early in life — protected, even as others nearby suffered.

The article also mentions the work of Dr. Adler, which links psychotherapeutic outcomes with the life stories people tell about themselves.

At some level, talk therapy has always been an exercise in replaying and reinterpreting each person’s unique life story. Yet Mr. Adler found that in fact those former patients who scored highest on measures of well-being — who had recovered, by standard measures — told very similar tales about their experiences.

They described their problem, whether depression or an eating disorder, as coming on suddenly, as if out of nowhere. They characterized their difficulty as if it were an outside enemy, often giving it a name (the black dog, the walk of shame). And eventually they conquered it.

“The story is one of victorious battle: ‘I ended therapy because I could overcome this on my own,’ ” Mr. Adler said. Those in the study who scored lower on measures of psychological well-being were more likely to see their moods and behavior problems as a part of their own character, rather than as a villain to be defeated. To them, therapy was part of a continuing adaptation, not a decisive battle.

Lastly, the article touches upon research showing that retrieving memories in the third person, as opposed to the first person, leads to better outcomes: one sees oneself as better adjusted after third-person recall of significant life events.

Two clear differences emerged. Those who replayed the scene in the third person rated themselves as having changed significantly since high school — much more so than the first-person group did. The third-person perspective allowed people to reflect on the meaning of their social miscues, the authors suggest, and thus to perceive more psychological growth.

The recordings showed that members of the third-person group were much more sociable than the others. “They were more likely to initiate a conversation, after having perceived themselves as more changed,” said Lisa Libby, the lead author and a psychologist at Ohio State University. She added, “We think that feeling you have changed frees you up to behave as if you have; you think, ‘Wow, I’ve really made some progress’ and it gives you some real momentum.”

I would love to hear of more literature in this area.


Language and Co-operation: Kin Selection and ‘Group’ Selection

A recent study has the potential to flare up the ‘Is group selection real?’ debate all over again.

As per the press release, when mutating robots were subjected to environmental pressures and allowed to evolve, they evolved a communication system in all cases except the one where selection was at the individual level and the robots were unrelated.

In the case where the robots were related to each other, a communication system evolved. This is an interesting finding, as the robots apparently have no way of detecting similarity or kinship; so the evolution of communication could only rest on the fact that kinship gave them a high probability of using the same sort of symbols to represent words and a similar type of grammar. This could turn kin selection on its head, as most kin-selection examples can now be framed in terms of similarity or kinship endowing individuals with similar shared propensities, thus allowing co-operation to emerge.

The fact that robots that underwent ‘group’ selection also evolved a communication system is a very fascinating finding that gives the field of group selection back some of its legitimacy and glamor. It has long been theorized that co-operation or altruism occurred in humans because of group selection; but there have been hardcore opponents of this theory, who either explain group selection in terms of kinship, or provide alternate explanations involving retribution and punishment for social cheaters. The details of the paper are available here, and the group selection involved did not include punishment of cheaters or social loafers.

Also interesting to note is that in the population of robots subjected to individual selection, a primitive form of communication involving deception emerged.

This study has already led to an article relating it to the evolution of language in humans. I believe human language evolved in an EEA that involved both group/kin selection as well as individual selection; that is why we sometimes use language for miscommunication. I am sure this study will fuel a lot of debate, especially in the corner of the blogosphere devoted to the evolution of altruism and co-operation.

Update: The Panda’s Thumb has a good article about a similar emergence of co-operation in a different experiment. Read it for a more in-depth analysis of co-operation evolution.

Moral Intuitions: Musings continued.

In the last post, we dwelt on the classical trolley problem as well as on a new type of moral dilemma that may be termed the Airplane dilemma.

In some versions of the Airplane (as well as the Trolley) problem, the problem is framed so as to push us into examining our notions of trusting or being suspicious of strangers (terrorist scenarios), and to take into account the past as well as future characteristics of these people (like high IQ or national celebrity status) in arriving at a moral decision as to whom it would be more moral for the doctor to serve. The airplane problem mostly focuses on the trust-vs-suspicion dimension, is people-centered, and emphasizes assessing people and situations correctly in a limited amount of time. Once the decision is made, the action is more or less straightforward.

The trolley problem is similar, but of a somewhat different nature. Here the focus is on actions and outcomes. The morality of an action is judged by its outcome as well as by other factors, like whether the (in)action was due to negligence, was indirect, was personally motivated, etc. The people-centered focus is limited to the using-as-means versus ends-in-themselves distinction and, in the later problems (president-in-the-yard), to that of guilty vs innocent. The innocent, careful child playing on the unused track, while five careless, ignorant idiots play on the used track, is another variation that plays on this careful-action versus careless-action distinction.

It is my contention that while the Trolley problem aptly makes clear the various distinctions and subtleties involved in an Action predicate (viz. whether the action is intentional; whether it is accidental, and if so how much negligence is involved; whether the (in)action could be prevented or executed differently for different outcomes, etc.), it does not offer much insight on how to evaluate the Outcome predicate or the Intention predicates.

In the Trolley problem, while the intentional-vs-accidental difference may guide our intuitions regarding good and evil in the case of positive or negative outcomes, the careful-versus-careless (negligent) action distinction guides our intuitions regarding normal, day-to-day good and bad acts. Here a distinction must be made between Evil acts (intentionally bad outcomes) and Bad acts (accidental or negligent bad outcomes). One can even make a distinction between Good acts (performed with good intentions) and Lucky acts (accidental good outcomes, maybe due to fortuitous care exhibited). Thus, a child playing on an unused track may just be a ‘bad’ child; but five guilty men tied to the tracks (even by a mad philosopher) are an ‘evil’ lot. Our intuitions in the two cases would thus differ, and would not necessarily be determined by utilitarian concerns like the number of lives.

Some formulations of the airplane problem, on the other hand, relate to quick assessment of people and situations and whether to trust or be suspicious. The problem is complicated by the question of whether the doctor should invest time in gathering more data to confirm or reject her suspicion, versus acting quickly and potentially aggravating the situation or the long-term outcome. These formulations and our intuitive answers may tell us more about the intention predicates we normally use: whether we intend to be trusting, innocent and trustworthy, or suspicious, cautious and careful. If cautious and careful, how much assessment and fact-gathering must we first resort to before committing to single-minded and careful action? Should we just look at the past in arriving at a decision, or should we also predict the future and take that into account? If we do predict the outcomes, is the Consequence predicate long-term or short-term? Is it an optimistic or a worst-case outcome scenario?

There are no easy answers. But neither is the grammar of any language supposed to be easy. Constructing valid and moral sentences as per a universal moral grammar should be an equally developmentally demanding task.

Abstract vs Concrete: the two genders? (the categorization debate)

In my previous posts I have focussed on distinctions in cognitive styles based on figure-ground, linear-parallel, routine-novel and literal-metaphorical emphasis.

There is another important dimension on which cognitive styles differ, and I think it involves a different mechanism than the figure-ground difference, which contrasts broader and looser associations (more context) with narrow and intense associations (more focus). One can characterize the figure-ground differences as detail- and part-oriented vs big-picture-oriented, and more broadly as an analytical vs a synthesizing style.

The other important difference pertains to whether associations, and hence knowledge, are mediated by abstract entities, or whether associations, knowledge and behavior are grounded in concrete entities/experiences. One could summarize this as follows: whether the cognitive style is characterized by an abstraction bias or by a particularization bias. One could even go a step further and pit an algorithmic learning mechanism against one based on heuristics and pragmatics.

It is my contention that the bias towards abstraction would be greater for males and the left hemisphere, while the bias towards particularization would be greater for females and the right hemisphere.

Before I elaborate on my thesis, the readers of this blog need to get familiar with the literature on categorization and the different categorization/concept formation/ knowledge formation theories.

An excellent resource is a four article series from Mixing Memory. I’ll briefly summarize each post below, but you are strongly advised to read the original posts.

Background: Most categorization efforts are focussed on classifying and categorizing objects, as opposed to relations or activities, and on the representation of such categories (concepts) in the brain. Objects are supposed to be made up of a number of features. An object may have a feature to varying degrees (it's not necessarily a binary has/doesn't-have type of association; one feature may be ‘tall’ and the feature strength may vary depending on the actual height).

The first post is regarding the classical view of concepts as definitional or rule-bound in nature. This view proposes that a category is defined by a combination of features, and that these features are binary in nature (one either has a feature or does not have it). Only those objects that have all the features of the category belong to it. The concept (the representation of the category) can be stored as a conjunction rule. Thus, the concept of bachelor may be defined as having the features male, single, human and adult. To determine the classification of a novel object, say, Sandeep Gautam, one would subject that object to the bachelor-category rule and calculate the truth value. If all the conditions are satisfied (i.e. Sandeep Gautam has all the features that define the category bachelor), then we may classify the new object as belonging to that category.

Thus,

Bachelor(x) = male(x) AND adult(x) AND single(x) AND human(x)

Thus a concept is nothing but a definitional rule.
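
To make the classical view concrete, here is a minimal Python sketch (my own illustration, not from the Mixing Memory posts; the feature names follow the bachelor example above):

```python
# Classical/definitional view: a category is a conjunction of binary
# features, and membership is all-or-none.

def is_bachelor(obj: dict) -> bool:
    """Truth value of male(x) AND adult(x) AND single(x) AND human(x)."""
    return all(obj.get(feature, False)
               for feature in ("male", "adult", "single", "human"))

# A novel object either satisfies the rule or it does not; the classical
# view allows no degrees of category membership.
sandeep = {"male": True, "adult": True, "single": True, "human": True}
print(is_bachelor(sandeep))  # True
```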

The second and third posts are regarding the similarity-based approaches to categorization. These may also be called clustering approaches. One visualizes the objects as spread out in a multi-dimensional feature space, with each dimension representing the degree to which a feature is present. The objects in this n-dimensional space that are close to each other, and are clustered together, are considered to form one category, as they have similar feature values. In these views, the distance between objects in the feature space represents their degree of similarity. Thus, the closer the objects are, the more likely they are similar and the more likely we can label them as belonging to one category.

To take an example, consider a 3-dimensional space with one dimension (x) signifying height, another (y) signifying color, and the third (z) signifying attractiveness. Suppose we rate many males along these dimensions and plot them in this 3-d space. We may then find that some males have high values of height (Tall), color (Dark) and attractiveness (Handsome), clustering in the right-upper region of the space and thus defining a category of males that can be characterized as the TDH/cool-hunk category (a category most common in Mills and Boon novels). Other males may meanwhile cluster around a category labeled squats.

There are some more complexities involved, like assigning weights to features in relation to a category, thus skewing the similarity-distance relationship by making it dependent on the weights (or importance) of the features to the category under consideration. In simpler terms, not all dimensions are equal, and the distance at which two objects count as similar (belonging to a cluster) may differ based on the dimension under consideration.
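
As a rough sketch of this weighted-distance idea (my own illustration; the dimensions, scales and weights are invented for the example):

```python
import math

def weighted_distance(x, y, weights):
    """Weighted Euclidean distance in feature space: smaller = more similar.
    weights[i] encodes how important dimension i is to the category."""
    return math.sqrt(sum(w * (a - b) ** 2
                         for a, b, w in zip(x, y, weights)))

# Two males rated on (height, darkness, attractiveness), each on a 0..1 scale.
tdh_member = (0.90, 0.80, 0.90)
candidate  = (0.85, 0.90, 0.80)
weights    = (1.0, 0.5, 2.0)  # attractiveness weighted most for this category
print(round(weighted_distance(tdh_member, candidate, weights), 3))  # 0.166
```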

There are two variations on the similarity-based or clustering approaches. Both have a similar classification and categorization mechanism, but differ in the representation of the category (concept). The category, it is to be recalled, is in both cases determined by the various objects that have clustered together. Thus, a category is a collection or set of such similar objects. The differences arise in the representation of that set.

One can represent a set of data by its central tendencies. Some central tendencies, like the mean value, represent an average value of the set and are an abstraction in the sense that no particular member may have that value. Others, like the mode or median, do signify a single member of the set: the most frequent one, or the middle one in an ordered list. When the discussion of central tendencies is extended to pairs, triplets or n-tuples of values (signifying an n-dimensional feature space), the concepts of mode and median become more problematic, and a measure based on them may also become abstract and no longer remain concrete.

The other statistic one needs is an idea of the distribution of the set values. With the mean, we have an associated variance, again an abstract parameter, which signifies how much the set values are spread around the mean. In the case of the median, one can resort to percentile values (10th percentile, etc.) and thus have concrete members representing the variance of the data set.

It is my contention that the prototype theories rely on abstraction and averaging of data to represent the data set (categories), while the Exemplar theories rely on particularization and representativeness of some member values to represent the entire data set.

Thus, supposing that in the above TDH Male classification task, we had 100 males belonging to the TDH category, then a prototype theory would store the average values of height, color and attractiveness for the entire 100 TDH category members as representing the TDH male category.

On the other hand, an exemplar theory would store the particular values for the height, color and attractiveness ratings of 3 or 4 Males belonging to the TDH category as representing the TDH category. These 3 or 4 members of the set, would be chosen on their representativeness of the data set (Median values, outliers capturing variance etc).

Thus, the second post of Mixing Memory discusses the Prototype theories of categorization, which posits that we store average values of a category set to represent that category.

Thus,

Similarity will be determined by a feature match in which the feature weights figure into the similarity calculation, with more salient or frequent features contributing more to similarity. The similarity calculation might be described by an equation like the following:

S_j = Σ_i (w_i · v(i,j))

In this equation, S_j represents the similarity of exemplar j to a prototype, w_i represents the weight of feature i, and v(i,j) represents the degree to which exemplar j exhibits feature i. Exemplars that reach a required level of similarity with the prototype will be classified as members of the category, and those that fail to reach that level will not.
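
A minimal sketch of this prototype rule in Python (the weights, feature values and threshold are illustrative, not from the quoted post):

```python
def prototype_similarity(v, weights):
    """S_j = sum_i w_i * v(i, j): v[i] is the degree to which exemplar j
    exhibits feature i; weights[i] is that feature's salience weight."""
    return sum(w_i * v_i for w_i, v_i in zip(weights, v))

def is_member(v, weights, threshold):
    """Classify as a category member if similarity reaches the threshold."""
    return prototype_similarity(v, weights) >= threshold

print(is_member([0.9, 0.8, 0.9], weights=[1.0, 0.5, 2.0], threshold=2.0))  # True
```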

The third post discusses the Exemplar theory of categorization, which posits that we store all members, or in milder and more practical versions some members, as exemplars that represent the category. Thus, a category is defined by a set of typical exemplars (say, every tenth percentile).

To categorize a new object, one compares its similarity with all the exemplars belonging to a category, and if this reaches a threshold, the new object is classified as belonging to that category. If two categories are involved, one compares with exemplars from both categories and, depending on threshold values, either classifies it in both categories or, in a forced single-choice task, in the category that yields the better similarity scores.

Thus,

We encounter an exemplar, and to categorize it, we compare it to all (or some subset) of the stored exemplars for categories that meet some initial similarity requirement. The comparison is generally considered to be between features, which are usually represented in a multidimensional space defined by various “psychological” dimensions (on which the values of particular features vary). Some features are more salient, or relevant, than others, and are thus given more attention and weight during the comparison. Thus, we can use an equation like the following to determine the similarity of an exemplar:

dist(s, m) = Σ_i a_i · |y_i^stim − y_mi^ex|

Here, the distance in the space between an instance, s, and an exemplar in memory, m, is equal to the sum, over all dimensions (represented individually by i), of the exemplar's feature value subtracted from the stimulus's feature value on the same dimension. The sum is weighted by a, which represents the saliency of the particular features.
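
The same equation as a sketch in Python (again my own illustration; the salience weights and feature values are invented):

```python
def exemplar_distance(stimulus, exemplar, salience):
    """dist(s, m) = sum_i a_i * |y_i_stim - y_mi_ex|: a salience-weighted
    city-block distance between stimulus s and stored exemplar m."""
    return sum(a * abs(ys - ym)
               for a, ys, ym in zip(salience, stimulus, exemplar))

stim   = [0.9, 0.2]   # the new instance s
stored = [0.7, 0.3]   # an exemplar m in memory
print(round(exemplar_distance(stim, stored, salience=[2.0, 1.0]), 3))  # 0.5
```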

There is another interesting clustering approach that becomes available to us if we use an exemplar model: the proximity-based approach. In this, we determine all the exemplars (of different categories) lying within a similarity radius (proximity) around the object in consideration. Then we determine the categories to which these exemplars belong. The category to which the maximum number of these proximate exemplars belong is the category to which the new object is assigned.
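
A sketch of this proximity rule (the radius, the stored data and the distance hook are all illustrative):

```python
from collections import Counter

def classify_by_proximity(obj, stored, radius, dist):
    """stored: list of (feature_vector, category_label) pairs.
    Exemplars within `radius` of obj vote; the majority category wins."""
    votes = Counter(label for vec, label in stored if dist(obj, vec) <= radius)
    return votes.most_common(1)[0][0] if votes else None

# Usage, reusing exemplar_distance from the sketch above:
dist = lambda a, b: exemplar_distance(a, b, salience=[2.0, 1.0])
stored = [([0.7, 0.3], "A"), ([0.8, 0.2], "A"), ([0.1, 0.9], "B")]
print(classify_by_proximity([0.9, 0.2], stored, radius=0.6, dist=dist))  # 'A'
```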

The fourth post on Mixing Memory deals with a ‘theory’ theory approach to categorization, and I will not discuss it in detail right now.

I'd like to mention briefly, in passing, that there are other relevant theories, like schemata, scripts, frames and situated simulation theories of concept formation, that take prior knowledge and context into account in forming concepts.

However, for now I'd like to return to the prototype and exemplar theories and draw attention to the fact that prototype theories are more abstracted, rule-like and economical in nature, but subject to pragmatic deficiencies, given their inability to take variance, outliers and exceptions into account; while exemplar theories, being more concrete, memory-based and pragmatic in nature (able to account for atypical members), suffer from the problems of requiring large storage and unnecessary redundancy. One may even extrapolate these differences to those underlying procedural or implicit memory on the one hand and explicit or episodic memory on the other.


There is a lot of literature on prototypes and exemplars, and research supporting each. One such line of research concerns the visual perception of faces, where it is posited that we find average faces attractive because the average face is closer to a prototype of a face, and thus the similarity calculation needed to classify an average face is minimal. This ease of processing, we may subjectively feel as attractiveness of the face. Of course, male and female prototype faces would be different, both perceived as attractive.


Alternately, we may be storing examples of faces, some attractive, some unattractive, and one can theorize that we would then find even the unattractive faces very quick to recognize/categorize.

With this in mind, I would like to draw attention to a recent study that highlighted past-tense over-regularization in males and females and showed that not only do females make more over-regularization errors, but these errors also cluster around similar-sounding verbs.

Let me explain what over-regularization of the past tense means. While children are developing, they pick up language and start forming concepts like that of a verb and that of a past-tense verb. They sort of develop a folk theory of how past-tense verbs are formed: the theory is that the past tense is formed by appending ‘-ed’ to a verb. Thus, when they encounter a new verb that they have to use in the past tense (and which, say, is irregular), they will tend to append ‘-ed’ to it. Instead of learning that ‘hold’ in the past tense becomes ‘held’, they tend to form the past tense as ‘holded’.

Prototype theories suggest that they have a prototypical concept of a past-tense verb with two features: first, that it is a verb (it signifies action), and second, that it ends in ‘-ed’.

Exemplar theories, on the other hand, might predict that the past-tense verb category is a set of exemplars, with each exemplar representing one type of similar-sounding verbs (based on rhyme, same last coda, etc.). Thus, the past-tense verb category would contain some actual past-tense verbs like {‘linked’ representing sinked, blinked, honked, yanked, etc.; ‘folded’ representing molded, scolded, etc.}.

Thus, this past-tense verb concept, which is based on regular verbs, is also applied when determining the past tense of an irregular verb. On encountering ‘hold’, an irregular verb that one wants to use in the past tense, one may use ‘holded’, as ‘holded’ is a verb, ends in ‘-ed’, and is also very similar to ‘folded’. When comparing ‘hold’ with a prototype, one does not have the additional effect of rhyming similarity with exemplars that is present in the exemplar case; thus females, who are supposed to use an exemplar system predominantly, would be more susceptible to over-regularization effects than boys. Also, this over-regularization would be skewed, with more over-regularization for similar rhyming regular verbs in females. As opposed to this, boys, who are using the prototype system predominantly, would not show the skew-towards-rhyming-verbs effect. This is precisely what was observed in the study.
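
As a toy sketch of how an exemplar system would produce this rhyme skew (entirely my own illustration; the stored exemplars and the crude rhyme measure are invented):

```python
REGULAR_STEMS = ["link", "fold", "walk"]  # stems of stored '-ed' exemplars

def rhyme_score(a, b):
    """Crude rhyme similarity: length of the shared word-final suffix."""
    n = 0
    while n < min(len(a), len(b)) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def overregularization_pull(verb):
    """The better an irregular verb rhymes with a stored regular stem,
    the stronger the pull towards verb + '-ed' ('hold' -> 'holded')."""
    return max(rhyme_score(verb, stem) for stem in REGULAR_STEMS)

print(overregularization_pull("hold"))  # 3: shares '-old' with 'fold'
print(overregularization_pull("sing"))  # 0: rhymes with no stored stem
```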

Developing Intelligence has also commented on the same study, though he seems unconvinced by the symbolic rules-vs-words or procedural-declarative accounts of language as opposed to traditional connectionist models. The account given by the authors is entirely in terms of a procedural (grammatical-rule-based) versus declarative (lexicon-based, storing pairs of present- and past-tense verbs) mechanism, and I have taken the liberty of reframing it in terms of prototype versus exemplar theories, because it is my contention that procedural learning, in its early stages, is prototypical and abstractive in nature, while lexicon-based learning is exemplar-based and particularizing in nature.

This has already become a sufficiently long post, so I will not take much more space now. I will return to this discussion, examining research on prototypes vs exemplars in other fields of psychology, especially with reference to gender- and hemisphericity-based differences. I'll finally extend the discussion to the categorization of relations, and that should move us into a whole new field, one closely related to social psychology and one that I believe has been much ignored in cognitive accounts of learning, thinking, etc.

Zombies, AI and Temporal Lobe Epilepsy : towards a universal consciousness and behavioral grammar?

I was recently reading an article on zombies, about how the zombie argument has been used against physicalism and in consciousness debates in general, and a passage about Descartes at the beginning of the article captured my attention:

Descartes held that non-human animals are automata: their behavior is explicable wholly in terms of physical mechanisms. He explored the idea of a machine which looked and behaved like a human being. Knowing only seventeenth century technology, he thought two things would unmask such a machine: it could not use language creatively rather than producing stereotyped responses, and it could not produce appropriate non-verbal behavior in arbitrarily various situations (Discourse V). For him, therefore, no machine could behave like a human being. (emphasis mine)



To me this seems like a very reasonable and important speculation. Although we have learned a lot about how we are able to generate an infinite variety of creative sentences using Chomsky's generative grammar theory, we still do not have a coherent theory of how and why we are able to produce a variety of behavioral responses in arbitrarily varied situations. (I must qualify: we only know how we create new grammatically valid sentences; the study of semantics has not complemented the study of syntax, so we still do not know why we are also able to create meaningful sentences and not just grammatically correct gibberish like "Colorless green ideas sleep furiously". The fact that even this grammatically correct sentence is still interpretable, by using polysemy, homonymy or metaphorical senses for ‘colorless’, ‘green’, etc., may provide a clue to how we map meanings, viz. the Conceptual Metaphor Theory, but that discussion is for another day.)

If we stick to a physical, brain-based, reductionist, no-ghost-in-the-machine, evolved-as-opposed-to-created view of human behavior, then it seems reasonable to start from the premise of humans as an improvement over the animal models of stimulus-response (classical conditioning) or response-reinforcement (operant conditioning) theories of behavior, and to build upon them to explain what mechanism humans have evolved to provide behavioral flexibility as varied, creative and generative as the capacity for grammatically correct language generation. The discussions of behavioral coherence, meaningfulness, appropriateness and integrity can be left for another day, but the questions of behavioral flexibility and creativity need to be addressed and resolved now.

I'll start by emphasizing the importance of the response-reinforcement type of mechanism and circuitry. Unfortunately, most of the work I am familiar with regarding the modeling of the human brain/mind/behavior using neural networks focuses on the connectionist model, with the implicit assumption that all response is stimulus-driven and that one only needs to train the network, using feedback, to associate a correct response with a stimulus. Thus, we have an input layer for collecting or modeling sensory input, a hidden association layer, and an output layer that can be considered a motor effector system. This dissociation into an input layer representing input acuity and sensitivity, an output layer providing output variability and specificity, and one or more hidden layers associating input with output maps very well onto our intuitions of a sensory system, a motor system and an association system in the brain, generating behavior relevant to external stimuli/situations. However, this is simplistic in the sense that it is based solely on stimulus-response associations (classical conditioning) and ignores the other relevant type of association, response-reinforcement. Let me clarify that I am not implying that neural network models are behavioristic: in the form of hidden layers they leave enough room for cognitive phenomena; the contention is that they do not take into account operant conditioning mechanisms. Here it is instructive to note that feedback during training is not equivalent to operant-reinforcement learning: the feedback is there to strengthen the stimulus-response associations; it only indicates that a particular response triggered by a particular stimulus was correct.

For operant learning to take place, behavior has to be spontaneously generated and, based on the history of its reinforcement, its probability of occurrence manipulated. This takes us to an apparently hard problem: how can behavior be spontaneously generated? All our lives we have equated reductionism and physicalism with determinism, so a plea for spontaneous behavior seems almost like begging for a ghost in the machine. Yet on careful thinking, the problem of spontaneity (behavior in the absence of stimulus) is not that problematic. One could have a random number generator and code for random responses triggered by that random number generator. One could claim that introducing randomness in no way gives us ‘free will’, but that is a different argument. What we are concerned with is spontaneous action, and not necessarily ‘free’ or ‘willed’ action.

To keep things simple, consider a periodic oscillator in your neural network. Let us say it has a period of 12 hours; i.e. it is a simple inductor-capacitor pair, and it takes 6 hours for the capacitor to discharge and another 6 hours for it to recharge. Now we can make a priori connections between this 12-hour clock in the hidden layer and one of the outputs in the output layer, so that the output gets activated whenever the capacitor has fully discharged, i.e. at periodic intervals of 12 hours. Suppose that this output response is labeled ‘eat’. Thus we have coded into our neural network a spontaneous mechanism by which it ‘eats’ at 12-hour intervals.

Till now we haven't really trained our neural net, and moreover we have assumed circuitry like a periodic oscillator from the outset, so you may object that this is not how our brain works. But be reminded that just as normal neurons in the brain form a model for neurons in the neural network, there is also a suprachiasmatic nucleus that gives rise to circadian rhythms and implements a periodic clock.

As for training, one can assume the existence of just one periodic clock of small granularity, say a 1-second period, and then, using accumulators that code for how many ticks have elapsed since the last trigger, one can code for any arbitrary periodic response of coarser-than-one-second granularity. Moreover, one need not code for such accumulators explicitly: they would arise automatically out of training, from the other neurons connected to this ‘clock’ and lying between the clock and the output layer. Suppose that initially the one-second clock output is connected (via intervening hidden neuron units) to an output marked ‘eat’. Now we also have feedback in this system. Suppose that while training we provide positive feedback only on every 60*60*12th trial (and all its multiples) and negative feedback on all other trials; it is not inconceivable that an accumulator unit would form in the hidden layer and count the ticks coming out of the clock, sending the trigger to the output layer only on every 60*60*12th trial and suppressing the clock's output on every other trial. Voilà! We now have a 12-hour clock (implemented digitally by counting ticks) inside our neural network, coding for a 12-hour periodic response. We needed just one ‘innate’ clock mechanism, and using it and the facts of ‘operant conditioning’ or ‘response-reinforcement’ pairing, we can create an arbitrary number of such clocks in our body/brain. Also, notice that we need just one 12-hour clock, but can flexibly code for many different 12-hour periodic behaviors. Thus, if the ‘count’ in the accumulator is zero, we ‘eat’; if the count is midway between 0 and 60*60*12, we ‘sleep’. Though both eating and sleeping follow a 12-hour cycle, they do not occur concurrently, but are separated by a 6-hour gap.
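
Here is a minimal sketch of that clock-plus-accumulator arrangement (the constants come from the text; the implementation itself is my own illustration, not a real neural network):

```python
PERIOD = 60 * 60 * 12  # one-second ticks per 12-hour cycle

class Accumulator:
    """Counts ticks from a single innate 1-second clock and fires its
    output only when the trained period has elapsed."""
    def __init__(self, period, phase=0):
        self.period = period
        self.count = phase  # ticks elapsed since the last trigger

    def tick(self):
        self.count += 1
        if self.count >= self.period:
            self.count = 0
            return True
        return False

# One innate clock, two learned 12-hour behaviors offset by 6 hours:
eat = Accumulator(PERIOD)
sleep = Accumulator(PERIOD, phase=PERIOD // 2)

for t in range(PERIOD * 2):  # simulate 24 hours of one-second ticks
    if eat.tick():
        print(f"tick {t}: eat")
    if sleep.tick():
        print(f"tick {t}: sleep")
```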

Suppose further that one reinforcement to which one is constantly exposed, and which one uses for training the clock, is sunlight. Say the circadian clock is reinforced only by exposure to the midday sun, and by no other reinforcement. Then we have a mechanism in place for the external tuning of our internal clocks to a 24-hour circadian rhythm. It is conceivable that for training other periodic operant actions one need not depend on external reinforcement or feedback, but may implement an internal reinforcement mechanism. To make my point clear: while the ‘eat’ action, a voluntary operant, may initially be generated randomly and, in the traditional sense of reinforcement, be accompanied by an intake of food, which in the classical sense of the word is a ‘reinforcement’, the intake of food, being part and parcel of the ‘eat’ action, should not be treated as the ‘feedback’ required during training of the clock. During the training phase, though the operant may be activated at different times (and by the consequent intake of food be intrinsically reinforced), the feedback should be positive only for operant activations in line with the periodic training, i.e. only on trials on which the operant is produced as per the periodic training requirement; on all other trials negative feedback should be provided. After the training period, not only would the operant ‘eat’ be associated with the reinforcement ‘food’: it would also occur with a certain rhythm and periodicity. The goal of training here is not to associate a stimulus with a response (the usual neural-network associative learning), but to associate an operant (response) with a schedule (a concept of ‘time’). This is not that revolutionary a concept, I hope: after all, an association of a stimulus (or ‘space’) with a response is per se meaningless; it is meaningful only in the sense that the response is reinforced in the presence of the stimulus, and the presence of the stimulus provides us a cue to indulge in a behavior that would result in reinforcement. On similar lines, an association of a response with a schedule may seem arbitrary and meaningless; it is meaningful in the sense that the response is reinforced at a scheduled time/event, and the occurrence of the scheduled time/event provides us a reliable cue to indulge in a behavior that would result in reinforcement.

To clarify by way of an example: ‘shouting’ may be considered a response that is normally reinforcing, because it is, say, cathartic in nature. Now, shouting on seeing your spouse's lousy behavior may have had a history of reinforcement, and you may have a strong association between seeing the spouse's lousy behavior and shouting. You thus have a stimulus-response pair. Why you don't always shout, say when the stimulus is your boss's lousy behavior, is because under those stimulus conditions the response ‘shouting’, though still cathartic, may have severe negative costs associated with it, and hence in those situations it is not really reinforced. Hence the need for an association between ‘spouse's lousy behavior’ and ‘shouting’: only in the presence of the specific stimulus is shouting reinforcing, and not in all cases.

Take another example, that of ‘eating’, which again can be considered a normally rewarding and reinforcing response, as it provides us with nutrition. Now, eating 2 or 3 times a day may be rewarding; but eating all the time, or only with a 108-hour periodicity, may not be that reinforcing, because those schedules do not take care of our body's requirements. While eating with a 108-hour periodicity would impose severe costs on us in terms of undernutrition and survival, eating with a 2-minute periodicity too would not be that reinforcing. Thus, the idea of training spontaneous behaviors to a schedule is not that problematic.

Having taken a long diversion to argue the case for ‘operant conditioning’-based training of neural networks, let me come to my main point.

While the ‘stimulus’ and the input layer represent the external ‘situation’ that the organism is facing, the network comprising the clocks and accumulators represents the internal state and ‘needs’ of the organism. One may even claim, a bit boldly, that they represent the goals or motivations of the organism.

An ‘eat’ clock that is about to trigger an ‘eat’ response may represent a need to eat. This clock need not be digital, with an ‘eating’ act triggered only when the 12-hour cycle is completed to the dot. Rather, it would be a probabilistic, analog clock, with the probability of the eating response getting higher as the 12-hour cycle comes to an end, and the clock being reset whenever the eating response happens. If the clock is in the early phases of the cycle (just after an eating response), then the need to eat (hunger) is low; when the clock is in the last phases of the cycle, the hunger need is strong and would make the ‘eating’ action more and more probable.

Again, this response-reinforcement system need not be isolated from the stimulus-response system. Say one sees the stimulus ‘food’, and the hunger clock is still showing ‘medium hungry’. The partial activation of the ‘eat’ action resulting from seeing the stimulus ‘food’ (other actions, like ‘throw the food’ or ‘ignore the food’, may also be activated) may win over competing responses to the stimulus, as the hunger clock is still providing a medium probability of ‘hunger’ activation, and hence one may end up eating. This, however, may reset the hunger clock, and now a second ‘food’ stimulus may not be able to trigger the ‘eat’ response, as the activation of ‘eat’ due to the hunger clock is minimal and other competing actions may win over ‘eat’.
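
A sketch of this interaction (all numbers are invented for illustration): the stimulus proposes candidate actions, the hunger clock's phase modulates the ‘eat’ activation, and eating would reset the clock:

```python
def hunger_drive(ticks_since_meal, period):
    """Analog need: ~0 just after a meal, rising to 1 over a full cycle."""
    return min(ticks_since_meal / period, 1.0)

def respond_to_food(ticks_since_meal, period=12):
    """The stimulus 'food' partially activates several actions; the hunger
    clock's current drive is added to 'eat' before the competition."""
    activation = {"eat": 0.4, "ignore": 0.5, "throw": 0.1}
    activation["eat"] += hunger_drive(ticks_since_meal, period)
    return max(activation, key=activation.get)

print(respond_to_food(ticks_since_meal=1))   # 'ignore': clock recently reset
print(respond_to_food(ticks_since_meal=11))  # 'eat': hunger near its peak
```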

To illustrate the interaction between stimulus-response and response-reinforcement in another way: on seeing the written word ‘hunger’ as a stimulus, one consequence of that stimulus could be to manipulate the internal hunger clock so that its need for food is increased. This would be a simple operation of increasing the clock count, or making the ‘need for hunger’ stronger, and thus increasing the probability of occurrence of the ‘eat’ action.

I'd also like to take a leap here and equate ‘needs’ with goals and motivations. Thus, some of the most motivating factors for humans, like food, sex and sleep, can be explained in terms of underlying needs or drives, which seem to be periodic in nature. It is also interesting to note that many of them have cycles associated with them (sleep cycles, eating cycles), and that many of these cycles are linked with each other or with the circadian rhythm; if the clock goes haywire, it has multiple linked effects across the whole motivational ‘needs’ spectrum. In a manic phase one would have low needs for sleep, food, etc., while the opposite may be true in depression.

That brings me finally to Marvin Minsky and his AI attempts to code for human behavioral complexity.

In his analysis of the levels of mental activity, he starts with the traditional if-then rule and then refines it to include both situations and goals in the ‘if’ part.


To me this seems intuitively appealing: One needs to take into account not only the external ‘situation’, but also the internal ‘goals’ and then come up with a set of possible actions and maybe a single action that is an outcome of the combined ‘situation’ and ‘goals’ input.

However, Minsky does not think that simple if-then rules, even when they take goals into consideration, would suffice, so he posits if-then-result rules.

To me it is not clear at first how introducing a result clause makes any difference: both goals and stimulus may lead to multiple if-then rule matches and to the activation of multiple actions. These action activations are nothing but what Minsky has clubbed into the result clause, and we still have the hard problem of, given a set of matching rules, choosing one of them over the others.

Minsky has evidently thought about this and says:

What happens when your situation matches the Ifs of several different rules? Then you’ll need some way to choose among them. One policy might arrange those rules in some order of priority. Another way would be to use the rule that has worked for you most recently. Yet another way would be to choose rules probabilistically.

To me this seems not a problem of choosing which rule to use, but of choosing which response to use, given several possible responses resulting from the application of several rules to this situation/goal combination. It is tempting to assume that the ‘needs’ or ‘goals’ would uniquely determine the response given ambiguous or competing responses to a stimulus; yet I can imagine scenarios where the needs of the body do not provide a reliable clue, and one may need the algorithms/heuristics suggested by Minsky to resolve conflicts. Thus, I see the utility of if-then-result rules: we need a representation not only of the ‘if’ part of the rule (goals/stimulus), which tells us the set of possible actions that can be triggered by this stimulus/situation/needs combination, but also of the ‘result’ part, which tells us what reinforcement values these responses (actions) have for us, so that we can use this response-value association to resolve the conflict and choose one response over another. This response-value association seems very much like the operant-reinforcement association, so I am tempted once more to believe that the value one ascribes to a response may change with, and indeed reflect, bodily needs; but I'll leave that assumption for now and instead assume that somehow we do have different priorities assigned to the responses (and not to rules, as Minsky had originally proposed) and do the selection on the basis of those priorities.
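
A sketch of this reading of if-then-result rules (the rules, responses and values are all invented for illustration): matching rules propose responses, and the conflict is resolved by the current, need-dependent value of each rule's result, not by a fixed priority on the rules themselves:

```python
RULES = [
    # (test on the situation/goals, response, expected result)
    (lambda s: s["food_visible"], "eat",    "nutrition"),
    (lambda s: s["food_visible"], "ignore", "nothing"),
    (lambda s: s["threat"],       "flee",   "safety"),
]

def select_response(situation, result_values):
    """result_values maps each result to its current subjective value;
    on this account, those values shift with bodily needs."""
    candidates = [(response, result_values[result])
                  for test, response, result in RULES if test(situation)]
    return max(candidates, key=lambda c: c[1])[0] if candidates else None

state = {"food_visible": True, "threat": False}
values = {"nutrition": 0.9, "nothing": 0.2, "safety": 1.0}
print(select_response(state, values))  # 'eat'
```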

Though I have posited a single priority-based probabilistic selection of response, it is possible that a variety of selection mechanisms and algorithms are used and are activated selectively based on the problem at hand.

This brings me to Minsky's critic-selector model of mind. As per this model, one needs both critical thinking and problem-solving abilities to act adaptively. One need not just be good at solving problems; one also has to understand and frame the right problem, and then use the problem-solving approach best suited to it.


Thus, the first task is to recognize the problem type correctly. After recognizing a problem correctly, we may apply different selectors, or problem-solving strategies, to different problems.

He also posits that most of our problem solving is analogical and not logical. Thus, recognizing a problem is more like recognizing a past analogous problem, and selecting is then applying the methods that worked in that case to the present problem.

How does that relate to our discussion of behavioral flexibility? I believe that every time we are presented with a stimulus and have to decide how to behave in response, we are faced with a problem: that of choosing one response over all others. We need to activate a selection mechanism, and that selection mechanism may differ based on the critics we have used to define the problem. If the selection mechanism were fixed and hard-wired, we would not have behavioral flexibility. Because the selection mechanism may differ based on our framing of the problem in terms of the appropriate critics, our behavioral response can be varied and flexible. At times we may use the selector that takes into account only the priorities of the different responses in terms of the needs of the body; at other times the selector may be guided by different selection mechanisms that involve emotions and values as the driving factors.

Minsky has also built a hierarchy of critic-selector associations, and I will discuss it in the context of developmental unfolding in a subsequent post. For now, it is sufficient to note that different types of selection mechanisms would be required to narrow the response set under different critical appraisals of the initial problem.

To recap: a stimulus may trigger different responses simultaneously, and a selection mechanism would be involved that selects the appropriate response, based on the values associated with the responses and on the selection algorithm activated by our appraisal of the reason for the conflicting and competing responses. While critics help us formulate the reason for multiple responses to the same stimulus, the selector helps us apply different selection strategies to the response set, based on what selection strategy worked on an earlier problem that involved analogous critics.

One can further dissociate this into two processes. One is grammar-based and syntactic: it uses rules for generating a valid behavioral action from the critic and selector predicates and from the particular response sets and strategies that make up the critic and selector clauses respectively. By combining and recombining the different critics and selectors, one can make an infinite number of rules for how to respond to a given situation, and each such rule application may potentially lead to a different action. The other process is that of semantics: how the critics are mapped onto the response sets, and how the selectors are mapped onto different value preferences.

Returning to response selection: given a stimulus, clearly there are two processes at work. One uses the stored if-then rules (the stimulus-response associations) to make available to us the set of all actions that are a valid response to the situation; the other uses the then-result rules (and the response-value associations, which I believe are dynamic in nature and keep changing) to choose one response from that set, as per the ‘subjective’ value preferred at the moment. This may be the foundation for the ‘memory’ and ‘attention’ dissociations in the working-memory abilities used in the stroop task, and it is tempting to think that while the DLPFC and the executive centers determine the set of all possible actions (utilizing memory) given a particular situation, the ACC selects among the competing responses based on the associated values, by selectively directing attention to the selected response/stimulus/rule.

Also, it seems evident that one way to increase adaptive responding would be to become proficient at discriminating stimuli and perceiving the subjective world accurately; the other would be to become more and more proficient at directing attention to a particular stimulus/response over others, and to our internal representations of them, so that we can discriminate between the different available responses and choose among them based on an accurate assessment of our current needs/goals.

This takes me finally to the two types of consciousness that Hughlings-Jackson had proposed: subject consciousness and object consciousness.


Using his ideas of sensorimotor function, Hughlings-Jackson described two “halves” of consciousness, a subject half (representations of sensory function) and an object half (representations of motor function). To describe subject consciousness, he used the example of sensory representations when visualizing an object. The object is initially perceived at all sensory levels. This produced a sensory representation of the object at all sensory levels. The next day, one can think of the object and have a mental idea of it, without actually seeing the object. This mental representation is the sensory or subject consciousness for the object, based on the stored sensory information of the initial perception of it.

What enables one to think of the object? This is the other half of consciousness, the motor side of consciousness, which Hughlings-Jackson termed “object consciousness.” Object consciousness is the faculty of “calling up” mental images into consciousness, the mental ability to direct attention to aspects of subject consciousness. Hughlings-Jackson related subject and object consciousness as follows:

The substrata of consciousness are double, as we might infer from the physical duality and separateness of the highest nervous centres. The more correct expression is that there are two extremes. At the one extreme the substrata serve in subject consciousness. But it is convenient to use the word “double.”

Hughlings-Jackson saw the two halves of consciousness as constantly interacting with each other, the subjective half providing a store of mental representations of information that the objective half used to interact with the environment.

Further,

The term “subjective” answers to what is physically the effect of the environment on the organism; the term “objective” to what is physically the reacting of the organism on the environment.

Hughlings-Jackson's concept of subjective consciousness is akin to the if-then representation of mental rules. One needs to perceive the stimuli as clearly as possible and to represent them along with their associated actions, so that an appropriate response set can be activated to respond to the environment. His object consciousness is the attentional mechanism needed to narrow down the options and focus on those mental representations and responses that are to be selected and used for interacting with the environment.

As per him, subject and object consciousness arise from a need to represent sensations (stimuli) and movements (responses) respectively, and this need is apparent if our stimulus-response and response-reinforcement mappings are to be taken into account in determining appropriate action.

All nervous centres represent or re-represent impressions and movements. The highest centres are those which form the anatomical substrata of consciousness, and they differ from the lower centres in compound degree only. They represent over again, but in more numerous combinations, in greater complexity, specialty, and multiplicity of associations, the very same impressions and movements which the lower, and through them the lowest, centres represent.

He had postulated that temporal lobe epilepsy involves a loss of objective consciousness (leading to automatic movements, as opposed to voluntary movements that follow a schedule and do not happen continuously) and an increase in subjective consciousness (leading to feelings like deja vu, or an over-consciousness in which every stimulus seems familiar, triggers the same response set, and nothing seems novel: the dreamy state). These he described as the positive and negative symptoms or deficits associated with an epileptic episode.

It is interesting to note that one of the positive symptoms he describes in epilepsy, associated with subjective consciousness of the third degree, is ‘Mania’: the same label that Minsky uses for a critic at his sixth, self-conscious, level of thinking. The critics Minsky lists are:

Self-Conscious Critics. Some assessments may even affect one’s current image of oneself, and this can affect one’s overall state:

None of my goals seem valuable. (Depression.)
I’m losing track of what I am doing. (Confusion.)
I can achieve any goal I like! (Mania.)
I could lose my job if I fail at this. (Anxiety.)
Would my friends approve of this? (Insecurity.)

It is interesting to note that this critic, a subjective appraisal of one's problem in terms of Mania, can lead to a subjective consciousness that is characterized as Mania.

If Hughlings-Jackson studied epilepsy correctly and drew valid inferences from it, then this may tell us a lot about how we respond flexibly to novel and familiar situations, and about how the internal complexity required to ensure flexible behavior leads to representational needs in the brain that might make consciousness necessary.

Incongruence perception and linguistic specificity: a case for a non-verbal stroop test

In a follow-up to my last post on color memory and how it affects actual color perception, I would like to highlight a classic psychological study by Bruner and Postman, which showed that even for non-natural artifacts, like suits in a playing-card deck, our expectation of the normal color or shape of a suit affects our perception of a stimulus that is incongruent with those expectations.

In a nutshell, in this study incongruent stimuli like a red spade card or a black heart card were presented for brief durations, and the subjects were asked to identify the stimuli completely: the form or shape (heart/spade/club/diamond), the color (red/black) and the number (1 to 10; face cards were not used) of the stimuli.

The trials used both congruent stimuli (e.g. a red heart, a black club) and incongruent stimuli (a black heart, a red spade).

To me this appears to be a form of Stroop task in which, if one assumes that form is a more salient stimulus than color, the presentation of a spade figure would automatically activate the perception of black, and the prepotent color-naming response would be ‘black’, despite the fact that the spade was presented in red. This prepotent ‘black’ verbal response would, as per standard Stroop effect explanations, have to be inhibited for the successful ‘red’ verbal response to happen. I am making an analogy here that the form of a suit is equivalent to the linguistic color-term and that it triggers a prepotent response.

In this light, the results of the experiment do seem to suggest a Stroop effect in this playing-deck task, with subjects requiring longer exposures (more trials) to recognize incongruent stimuli as compared to congruent stimuli.

Perhaps the most central finding is that the recognition threshold for the incongruous playing cards (those with suit and color reversed) is significantly higher than the threshold for normal cards. While normal cards on the average were recognized correctly — here defined as a correct response followed by a second correct response — at 28 milliseconds, the incongruous cards required 114 milliseconds. The difference, representing a fourfold increase in threshold, is highly significant statistically, t being 3.76 (confidence level < .01).
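
The threshold here is procedural: exposure duration increases trial by trial until the subject is correct twice in a row. A minimal sketch of that procedure, assuming invented psychometric functions (only the 28 ms vs 114 ms means above come from the study), might look like:

```python
# Sketch of the ascending-exposure threshold procedure quoted above:
# exposure increases trial by trial, and the threshold is the first
# duration with two consecutive correct reports. The recognition
# probabilities are invented for illustration.
import random

def recognition_threshold(p_correct_at, exposures):
    """Return the first exposure (ms) with two consecutive correct reports."""
    prev_correct = False
    for ms in exposures:
        correct = random.random() < p_correct_at(ms)
        if correct and prev_correct:
            return ms
        prev_correct = correct
    return None  # never recognized within the series

exposures = range(10, 1000, 10)
# Hypothetical psychometric functions: recognition of incongruent cards
# needs far longer exposures before it becomes likely.
normal = lambda ms: min(1.0, ms / 50)
incongruent = lambda ms: min(1.0, ms / 250)

random.seed(0)
print("normal card threshold:     ", recognition_threshold(normal, exposures), "ms")
print("incongruent card threshold:", recognition_threshold(incongruent, exposures), "ms")
```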

Further interesting is the fact that this incongruence threshold decreases if one or more incongruent trials precede the trial in question, and increases if the preceding trials are with normal cards. This is in line with current theories of the Stroop effect as involving both memory and attention, whereby the active maintenance of the goal (ignore form and focus on color while naming color) affects performance and errors on all trials, while the attentional mechanism that resolves incongruence affects only reaction times (and leads to RT interference).

In the playing card study no reaction time measures were taken; only the exposure threshold needed to correctly recognize the stimuli was recorded. So we don’t have RT measures, but a high threshold is indicative of, and roughly equivalent to, errors on a trial. The higher thresholds on incongruent trials mean that there were more errors on incongruent trials than on congruent ones. The increase in threshold when normal cards precede, and the decrease when incongruent cards precede, is analogous to the high-congruency and low-congruency trials described in the Kane and Engle study and analyzed in my previous posts, as well as in a Developing Intelligence post. It is intuitive to note that when incongruent trials precede, the goal (ignore form and focus on color while naming color) becomes more salient; when normal cards precede, one may have RT facilitation and the (implicit) goal to ignore form may become less salient.

Experience with an incongruity is effective in so far as it modifies the set of the subject to prepare him for incongruity. To take an example, the threshold recognition time for incongruous cards presented before the subject has had anything else in the tachistoscope — normal or incongruous — is 360 milliseconds. If he has had experience in the recognition of one or more normal cards before being presented an incongruous stimulus, the threshold rises slightly but insignificantly to 420 milliseconds. Prior experience with normal cards does not lead to better recognition performance with incongruous cards (see attached Table). If, however, an observer has had to recognize one incongruous card, the threshold for the next trick card he is presented drops to 230 milliseconds. And if, finally, the incongruous card comes after experience with two or three previously exposed trick cards, threshold drops still further to 84 milliseconds.
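
The context dependence quoted above is easier to see when tabulated. This snippet merely restates the four thresholds given in the passage and their size relative to the no-prior-exposure baseline:

```python
# The four thresholds quoted above, restated as data: recognition
# threshold (ms) for an incongruous card by what preceded it.
prior_context_threshold_ms = {
    "nothing (first card shown)": 360,
    "one or more normal cards": 420,
    "one incongruous card": 230,
    "two or three incongruous cards": 84,
}

baseline = prior_context_threshold_ms["nothing (first card shown)"]
for context, ms in prior_context_threshold_ms.items():
    print(f"{context:35s} {ms:4d} ms ({ms / baseline:.2f}x baseline)")
```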

Thus the goal-maintenance part of the Stroop effect is clearly in play in the playing-card task and affects the threshold for correct recognition.

The second part of the usual explanation of the Stroop task is based on directed inhibition: an attentional process that inhibits the prepotent response. This effect comes into play only on incongruent trials. An alternate explanation is that there is increased competition between competing representations on incongruent trials and, instead of any top-down directed inhibition in line with the goal/expectation, there is only localized inhibition. The dissociation of a top-down goal-maintenance mechanism and a separate attentional selection mechanism seems more in line with the newer model, wherein inhibition is local and not top-directed.

While RT measures are not available, it is interesting to take a look at some of the qualitative data that supports a local inhibition and attentional mechanism involved in reacting to incongruent stimuli. The authors present evidence that the normal course of responses generated by the subjects for (incongruent) stimuli is dominance, compromise, disruption and finally recognition.

Generally speaking, there appear to be four kinds of reaction to rapidly presented incongruities. The first of these we have called the dominance reaction. It consists, essentially, of a “perceptual denial” of the incongruous elements in the stimulus pattern. Faced with a red six of spades, for example, a subject may report with considerable assurance, “the six of spades” or the “six of hearts,” depending upon whether he is color or form bound (vide infra). In the one case the form dominates and the color is assimilated to it; in the other the stimulus color dominates and form is assimilated to it. In both instances the perceptual resultant conforms with past expectations about the “normal” nature of playing cards.

A second technique of dealing with incongruous stimuli we have called compromise. In the language of Egon Brunswik, it is the perception of a Zwischengegenstand or compromise object which composes the potential conflict between two or more perceptual intentions. Three examples of color compromise: (a) the red six of spades is reported as either the purple six of hearts or the purple six of spades; (b) the black four of hearts is reported as a “grayish” four of spades; (c) the red six of clubs is seen as “the six of clubs illuminated by red light.”

A third reaction may be called disruption. A subject fails to achieve a perceptual organization at the level of coherence normally attained by him at a given exposure level. Disruption usually follows upon a period in which the subject has failed to resolve the stimulus in terms of his available perceptual expectations. He has failed to confirm any of his repertory of expectancies. Its expression tends to be somewhat bizarre: “I don’t know what the hell it is now, not even for sure whether it’s a playing card,” said one frustrated subject after an exposure well above his normal threshold.

Finally, there is recognition of incongruity, the fourth, and viewed from the experimenter’s chair, most successful reaction. It too is marked by some interesting psychological by-products, of which more in the proper place.

This sequence points towards a local inhibition mechanism in which either one of the responses is selected and dominates the other; or both responses mix and yield a compromise percept (this is why a gray banana may appear yellowish, or why a banana matched by subjects to a gray background may actually be made bluish, analogous to the blackish-red perception of suit color); or, in some cases, the incongruent stimulus cannot be adequately reconciled with expectations and the resulting frustration leads to disruption (in the classical Stroop task this may explain the skew in RTs for some incongruent trials: some take a long time, perhaps because the subject has just suffered disruption); and finally one may respond correctly, but only after a reasonable delay. This sequence is difficult to explain in terms of a top-down expectation model and directed inhibition.

Finally, although we have been discussing the playing card task in terms of the Stroop effect, one obvious difference is striking. In the playing-card and the pink-banana experiments the colors and forms or objects are tightly coupled: we have normally only ever seen a yellow banana or a red heart suit. This is not so for printed graphemes and linguistic color terms: we have viewed them in all colors, mostly in black/gray, yet the strong hue association we still have with those color terms sits at a supposedly higher layer of abstraction.

Thus, when an incongruent stimulus like a red spade is presented, either of the object’s features may take prominence and assimilate the other. For example, we may give more salience to form and identify it as a black spade; alternately, we may identify the object by its color and assimilate the shape, perceiving a red heart. Interestingly, both kinds of errors were observed in the Bruner study. To date, one has not really focussed on the reverse Stroop test, whereby one asks people to name the color word and ignore the actual ink color. This seems an easy task, as linguistic graphemes are not tied to any color in particular, the only exception being the black hue, which might reasonably be said to be associated with all graphemes (it is the most popular ink). Consistent with this, in a reverse Stroop test subjects might sometimes respond ‘black’ when viewing the linguistic term ‘red’ printed in black ink. This effect would hold for the ‘black’ word response and black ink-color only, and for no other ink color. Also, the response time for the ‘black’ response may be facilitated when the ink-color is black (and the linguistic term is also ‘black’) compared to other ink-colors and other color-terms. No one has conducted such an experiment, but one could run it and see whether there is a small Stroop effect in the reverse direction too.

Another important question is whether the Stroop interference in both cases, the normal Stroop test and the playing card test, is due to a similar underlying mechanism, whereby past sensory associations (in the case of playing cards) or semantic associations (in the case of linguistic color terms) tie the color terms or forms (bananas/suits) to a hue, so that seeing that stimulus feature automatically produces a sensory or semantic activation of the corresponding hue. This prepotent response then competes with the response triggered by the actual hue of the presented stimulus, and this leads to local inhibition and selection, producing the Stroop interference effects.
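
One way to see how purely local competition can produce interference (and, occasionally, the dominance and disruption reactions described earlier) is a toy race between two mutually inhibiting evidence accumulators. This is only a sketch under assumed parameters, not a fitted model:

```python
# A toy race between two mutually inhibiting evidence accumulators:
# one driven by the actual hue, one by the hue associated with the
# form. Inhibition is purely local (each response suppresses the
# other); there is no top-down inhibition. All parameters are invented.
import random

def trial(prepotent_drive, correct_drive=1.0,
          inhibition=0.05, noise=0.5, threshold=30.0):
    """Run one trial; return (response, time_steps_to_respond)."""
    correct, prepotent = 0.0, 0.0
    for t in range(1, 100_000):
        c = correct + correct_drive - inhibition * prepotent + random.gauss(0, noise)
        p = prepotent + prepotent_drive - inhibition * correct + random.gauss(0, noise)
        correct, prepotent = max(c, 0.0), max(p, 0.0)
        if correct >= threshold:
            return "correct", t
        if prepotent >= threshold:
            return "prepotent (dominance error)", t
    return "disruption (no response)", None

random.seed(1)
# Congruent: no competing association. Incongruent: the form's
# associated hue drives the competing response.
print("congruent:  ", trial(prepotent_drive=0.0))
print("incongruent:", trial(prepotent_drive=0.9))
```

On most runs the incongruent trial settles on the correct response noticeably later (RT interference), and with a strong enough association drive the prepotent response occasionally wins outright, mirroring the dominance reaction described above.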

If a non-verbal Stroop test, comprising natural or man-made objects with strong color associations, yields results similar to those observed in the classical Stroop test, then this may be a strong argument for domain-general associationist/connectionist models of language semantics, and imply that linguistic specificity may be overhyped and that at least the semantics part of language acquisition is mostly a domain-general process. On the other hand, results on the non-verbal Stroop test that differ from the normal Stroop test may indicate that the binding of features into objects during perception, and the binding of abstract meanings to linguistic words, have different underlying mechanisms, and that there is much room for linguistic specificity; in that case the binding of abstract meaning to terms is a different problem from that of binding visual features to represent and perceive an object. Either way, one may take methods and results from one field and apply them in the other.

To me this seems extremely interesting and promising. Evidence that the Stroop effect is due to two processes, one attentional and the other goal-maintenance/memory mediated, and its replication in a non-verbal Stroop test, would help us greatly by focusing research on common cognitive mechanisms underlying working memory: one dependent on the memory of past associations and their active maintenance, whether verbal/abstract or visual/sensory, and the other dependent on real-time resolution of incongruity/ambiguity by focusing attention on one response to the exclusion of the other. These may well correspond to the Gc and Gf measures of intelligence: one reflecting how good we are at handling and using existing knowledge, the other how well we are able to take into account new information and respond to novel situations. One may even extend this to the two dissociated memory mechanisms that have been observed in parahippocampal regions, one used when encountering familiar situations/stimuli and the other when encountering novel stimuli: one essentially a process of assimilation as per existing schemas/conceptual metaphors, the other a process of accommodation involving, perhaps, the appreciation/formation of novel metaphors and constructs.

Enough theorizing and speculation for now. Maybe I should act on this and make an online non-verbal Stroop test to put my theories to the test!
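
As a first concrete step, here is a minimal sketch of trial generation for such a test (report the ink color, ignore the suit); the trial count, the proportion of incongruent trials and the field names are all arbitrary choices for illustration:

```python
# A minimal sketch of trial generation for a playing-card Stroop test:
# the task is to report the ink color while ignoring the suit. Trial
# count, incongruent proportion and field names are arbitrary choices.
import random

SUIT_CANONICAL_COLOR = {"heart": "red", "diamond": "red",
                        "spade": "black", "club": "black"}

def make_trials(n=40, p_incongruent=0.5, seed=42):
    rng = random.Random(seed)
    trials = []
    for _ in range(n):
        suit = rng.choice(list(SUIT_CANONICAL_COLOR))
        canonical = SUIT_CANONICAL_COLOR[suit]
        incongruent = rng.random() < p_incongruent
        color = ("black" if canonical == "red" else "red") if incongruent else canonical
        trials.append({"suit": suit, "color": color,
                       "congruent": not incongruent,
                       "correct_response": color})
    return trials

for t in make_trials(n=5):
    print(t)
```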

Endgame: Another interesting twist to the playing cards experiment could be in terms of motivated perception. Mixing Memory discusses another classical study by Bruner in this regard. Suppose we manipulate people’s motivations so that they are expecting to see either a heart or a red color as the next stimulus, because only that desired stimulus would yield a desired outcome, say, orange juice. In that case, when presented with an incongruent stimulus (a red spade), would we be able to differentially manipulate the resolution of the incongruence, such that those motivated to see red would report seeing a ‘red spade’ and those motivated to see a heart would report a ‘black heart’? Or is the effect modality specific, with effects on color more salient than on form? Is it easier to see a different color than it is to see a different form? And is this related to Shams’s modality-specific visual illusion, which is asymmetric in the sense that two beeps with one flash easily lead to the perception of two flashes, but not vice versa?

Should you read my blog or my short-stories/poems?

BPS Research Digest has reported on an interesting study which found that lifetime reading of fiction is associated with enhanced empathic abilities, while predominantly reading non-fiction is associated with the converse. The study suffers from some limitations (the usual caveat that correlation is not causation: empathic people might be more drawn to fiction, rather than it being the other way round) as well as methodological constraints (it used familiarity with fiction and non-fiction writers’ names as a criterion for exposure to each genre; by this measure I would do well in both cases, as I was a very prolific fiction reader earlier but in recent years have been reading non-fiction almost exclusively, so my familiarity with fiction authors doesn’t reflect my current fiction exposure), but the results are still tantalizing and the implications profound.

For me that raises the question of whether I should also occasionally post some of my short stories on this blog, in a bid to offset the drop in empathy that my readers will undergo by reading my non-fiction!

There is another interesting study highlighted in this week’s BPS Digest, which reveals that a thicker corpus callosum accompanies right-brain hemisphericity. (Is it that a thicker corpus callosum ensures that the right mechanism is in place, namely more communication between the hemispheres, for the more feminine, talkative 🙂 , holistic right brain to become dominant? Or is it the other way round: right-brain dominance causes more interconnections between the hemispheres and leads to a thicker callosum?)

Steps for the evolution and development of languages

There is an interesting post at Babel’s Dawn highlighting the work of David Rose in relation to SFL (systemic functional linguistics).

As per Rose, certain prerequisites are required for the evolution and development of languages as we know them.

Four conditions are suggested for developing explanatory models that may account for these linguistic phenomena. These include (a) a mechanism for reproducing complex cultural behaviors intergenerationally over extended time, (b) a sequence by which articulated wordings could evolve from nonlinguistic primate communication, (c) extension of the functions of wording from enacting interpersonal interactions to representing speakers’ experience, and (d) the emergence of complex patterns of discourse for delicately negotiating social relations, and for construing experience in genres such as narrative. These conditions are explored, and some possible steps in language evolution are suggested, that may be correlated with both linguistic research and archaeological models of cultural phases in human evolution.

Edmund Bolles summarizes these as follows:

Rose’s four steps required for the growth and survival of language are:

  1. reproducibility: along with the “suite of biological adaptations” for speaking, there has to be some “mechanism” for precisely reproducing the language that happens to be spoken wherever one happens to be born. Many inquiries into language acquisition assumed this reproducibility is purely biological, but Rose insists that language is reproduced across generations “by cultural means.” In other words, children learn language from their elders. We will see on this blog that this explanation is not accepted quite as widely as a novice might think. One thing is clear, we got this skill after we said goodbye to the chimpanzee’s line of descent.
  2. exchangeability: Once speakers have the ability to reproduce words they can “exchange” them. Rose takes the idea of an exchange of words more literally than I do; thus he talks about “exchange behavior” in primates, but the basic idea of being able to take and modify one another’s existing words to create new ones appears sound enough. The interesting thing about such interactions is that both parties in the exchange “get” it. The usage is understood as a bit of wit or cleverness rather than as an error, so wit too is something added to our species when we had parted from the surviving primates.
  3. extendibility: one very peculiar quality of humans is what a resourceful species we are, able to turn established tools to new tasks as the purpose demands. A digging tool becomes a backscratcher becomes a probe. Equally, we can extend the uses of our verbal tools. Thus, words which were surely first “exchanged” as tools for interpersonal actions could be extended for use in expressing ideas and then extended again to be used in thinking through some complex set of ideas. At this point biology is left in the dust as the role of language is extended at a pace that far outdistances plodding natural selection.
  4. combinability: the various extensions of speech can be combined to produce still more verbal wonders, such as stories and polite behavior that lets people negotiate delicate situations without giving offense. At this point we can speak of craft, maybe even artistry. Speech, thought, and culture have moved so far from their primate roots that the idea of common descent becomes surprising.

To me these bring to mind the more genetic and physical (as opposed to culture-based, as Rose presumes them to be) prerequisites for language in particular, and symbolic manipulation in general, that Premack had outlined recently. I had commented on these earlier by integrating them with the existing stage-based developmental model of language evolution/development.

I’ll briefly recap the prerequisites that Premack had identified:

  • Voluntary Control of Motor Behavior. Premack argues that because both vocalization and facial expression are largely involuntary in the chimpanzee, they are incapable of developing a symbol system like speech or sign language.
  • Imitation. Because chimpanzees can only imitate an actor’s actions on an object, but not the actions in the absence of the object that was acted upon, Premack suggests that language cannot evolve.
  • Teaching. Premack claims that teaching behaviors are strictly human, defining teaching as “reverse imitation” – in which a model actor observes and corrects an imitator.
  • Theory of Mind. Chimps can ascribe goals to others’ actions, but Premack suggests these attributions are limited in recursion (i.e., no “I think you thought he would have thought that.”) Premack states that because recursion is a necessary component of human language, and because all other animals lack recursion, they cannot possibly evolve human language.
  • Grammar. Not only do chimps use nonrecursive grammars, they also use only words that are grounded in sensory experience – according to Premack, all attempts have failed to train chimps to use words with meanings grounded in metaphor rather than sensory experience.
  • Intelligence. Here Premack suggests that the uniquely human characteristics of language are supported by human intelligence. Our capacity to flexibly recombine pieces of sensory experience supports language, while the relative lack of such flexibility in other animals precludes them from using human-language like symbol systems.

To me, Imitation and Teaching seem to be the cognitive mechanisms by which the reproducibility of languages across cultures and generations is ensured.

Theory of mind abilities would definitely be utilized and instrumental in the process of exchangeability, whereby one can use tokens like words to exchange meanings. For this mechanism to evolve, the ability to understand that others have mental states similar to ours is necessary; only then can one comprehend what a person means when he uses a particular token. Also, the mirror system, which might be involved in the ToM module, may be sufficient to explain the evolution of linguistic words from non-linguistic communication.

Grammatical abilities like recursion and the ability to use metaphors map directly to capabilities like combinability and extendibility, whereby complex linguistic devices can be combined to produce complex discourses, and novel metaphors are used to extend the semantics associated with a word.

I’m quite intrigued and excited by such commonalities! Does this excite you too? Let me know via comments.

Generic vs specific feedback and the fundamental attribution error

A recent study indicates that giving children generic, trait-based feedback (in the form of “you are a good drawer”) increases feelings of helplessness after subsequent mistakes/failures and reduces their resilience in the face of failure, in comparison to a condition in which they are given specific, outcome-based feedback (of the form “you drew a good drawing”). It is thus apparent that generic praise promotes a view of one’s abilities as stable, inborn talents, while specific praise reinforces a concept of ability as a skill that is affected by circumstances and can be worked on and acquired.

Generic praise implies there is a stable ability that underlies performance; subsequent mistakes reflect on this ability and can therefore be demoralizing. When criticized, children who had been told they were “good drawers” were more likely to denigrate their skill, feel sad, avoid the unsuccessful drawings and even drawing in general, and fail to generate strategies to repair their mistake. When asked what he would do after the teacher’s criticism, one child said, “Cry. I would do it for both of them. Yeah, for the wheels and the ears.” In contrast, children who were told they had done “a good job drawing” had less extreme emotional reactions and better strategies for correcting their mistakes.

It is interesting to read this alongside the fundamental attribution error, which was the theme of my Blogger SAT Challenge essay. As per this bias, people have an inherent tendency to view their successes in terms of stable underlying talents/traits and their failures as reflective of external circumstances. The reasoning reverses when applied to others: others fare well due to luck (or external circumstances) and fare badly due to dispositional elements.

From the above study it is clear that though the fundamental attribution error may serve us well (after all, it must serve some purpose for it to have evolved), say by increasing our feelings of self-efficacy and thus leading to greater confidence/esteem, it also has its downside. It makes learning from our mistakes harder and leads to feelings of helplessness, or of an external locus of control, when we are faced with failures. This rationalization of failures as due to our helplessness (despite a perceived stable talent/trait) and to external circumstances (and not to some carelessness or lack of effort on our part in this specific instance) also leads to less resilience in the face of failure and less motivation to engage in similar activity in the future.

It is thus apparent that positive feedback to children should be framed in specific, outcome-based terms, so that they do not fall prey to the fundamental attribution bias and instead place more emphasis on skill-based rather than talent-based accounts. Conversely, it may be plausible to presume that when giving negative feedback it is best to be direct and point out any underlying issue the child may have, and not gloss it over by providing environmental explanations. The child will make up environmental excuses for failures anyway!

When inspiring the child to learn by observation, one should presumably describe others’ successes as resulting from stable traits/skills and explain their failures as due to circumstances not in their control. This would go a long way in helping the child overcome his inherent attribution bias, and lead to a generally positive and compassionate view of others and a resilient and humble view of himself.