The five minds—disciplined, synthesizing, creating, respectful, and ethical—differ from the multiple intelligences in that they work in a synergistic fashion rather than as separate categories of intelligence.
The “disciplined mind,” Gardner argues, is not simply knowing a particular subject but “learning to think the way people who are experts in the field think,” and should develop by the end of secondary school.
The second type of mind, the “synthesizing mind,” is defined by “deciding what to focus on, what’s important, what to ignore, and putting that together in a way that makes sense.” Given the dearth of information about synthesizing in textbooks, Gardner has become most intrigued by this concept. He considers himself primarily a synthesizer, but now, like a “fish that has suddenly discovered he’s in water,” he is faced with the challenge of uncovering what goes on as people synthesize, what distinguishes good synthesis from bad, and how the process can be enhanced.
Discussing the creative mind, Gardner points out that today “creating is a premium and not an option.” While one needs a certain amount of discipline and synthesizing to create, too much of either will stifle creativity.
To foster creativity in the classroom, Gardner recommends that teachers “model novel approaches and answers to questions and indicate [to students] that those responses are legitimate.” Students should be encouraged to come up with innovative approaches, discussing ideas that did not work and alternative models. There should also be study of “examples of creative ideas, actions, behaviors,” figuring out how success was attained, and what obstacles had to be overcome.
While the first three minds are more cognitively oriented, the last two, the respectful and the ethical, have more to do with personality and emotion. The respectful mind, Gardner indicates, has to do with “how we think and relate to other people, most importantly to other people around us.”
While this mind develops at a relatively young age, as a kind of intuitive altruistic sense of reaching out to those around us, “attempting to understand differences and work with them,” the ethical mind is more abstract and generally develops during adolescence. It has to do with fulfilling one’s responsibility in the world, in one’s job role and as a citizen, thinking in terms such as: “I’m a teacher…journalist…physicist, carrying out that role in the most professional way I can.”
Although Gardner thinks that only the last two types of mind are related to personality and emotion, I believe that the first three ‘cognitive’ minds can also be related to personality types, as it is my contention that personality dimensions are just different styles of cognition and emotion.
The disciplined mind utilizes the Conscientiousness traits of self-discipline, carefulness, thoroughness, orderliness, and deliberation to develop a thinking style marked by mastery of the conventional way in which experts familiar with the domain usually think.
The synthesizing mind utilizes the Neuroticism traits, which basically refer to an ability or inability to deal with environmental stimuli in a meaningful way. While discussions of neuroticism are usually couched in emotional terms (a more reactive sympathetic nervous system, greater sensitivity to environmental stimulation), I also believe that there is a cognitive dimension here: whether one reacts to each and every stimulus (information) or is more ‘cognitively calm and composed’, using deliberation to sort the relevant information from the irrelevant rather than reacting to every little information nugget. This is precisely the synthesizing mind: the ability to focus on what is important and not get burdened by information overload, the cognitive equivalent of not getting emotionally overwhelmed by environmental stress.
The creative mind, I believe, utilizes the Openness to Experience traits of unconventional and individualistic beliefs, broad interests, novelty preference and imagination to indulge in a thinking style marked by creativity: the ability to create something novel.
The respectful mind utilizes the Agreeableness traits of consideration, friendliness, generosity, helpfulness and concern with cooperation and social harmony to indulge in a thinking style imbued with an altruistic sense of reaching out to those around us, “attempting to understand differences and work with them.”
The ethical mind, on the other hand, utilizes the Extraversion traits of enjoying human interactions, enthusiasm, talkativeness, assertiveness, gregariousness and pleasure in social interactions to indulge in a thinking style marked by an emphasis on activity and on social role and responsibility – the precise recipe for the ethical mind!
Gardner also proposes a relationship/hierarchy between the five minds.
In the latter part of his book, Gardner explores the interaction between the five minds. He doesn’t see them as isolated categories, but as a loose taxonomy in which respect develops before ethics and discipline before synthesis, with all of them ultimately culminating in creating.
This implication of a developmental framework, in which the order of development is discipline, synthesis, respect, ethics and creativity, maps very well to my own obsession with a five-stage developmental model of cognitive, moral, perspective-taking, linguistic, symbolic, pretend-play and other abilities. I believe that Gardner has got the order wrong, and that the traits (and the five minds) develop in the following order: Neuroticism, Conscientiousness, Extraversion, Agreeableness and finally Openness to Experience. I may be wrong here, but I will write in detail on my rationale for this developmental path in a subsequent post.
While it is reasonable to stop here, I am tempted to take the analogies further and link this up with the Five Faces of Genius.
To me, the Fool epitomizes perseverance and thus a Disciplined and Conscientious mind.
The Observer epitomizes the ability to pick a needle out of a haystack and thus a Synthesizing and low-Neuroticism (cognitively stable) mind.
The Alchemist, with its focus on active bridging and connection between domains, seems to reflect an ethical and extraverted mind.
The Seer, with an ability to imagine and visualize, may have a corresponding capacity to imagine and feel others’ emotions, and this empathy leads it to have a respectful and Agreeable mind.
The Sage, with its ability to simplify, may find a resonance in the openness traits of ‘preferring the plain, straightforward, and obvious over the complex, ambiguous, and subtle’ and may be linked to the creative and Open mind!
Do let me know how you find these conjectures and linkages. I hope I am not taking the analogical reasoning of the alchemist to an unacceptable extreme!! Even if I am, you can be sure that it is just due to my high energy levels and my ethical concerns!!
I am Sandeep Gautam, your host for tonight, and it is my pleasure to walk you through this brand new episode of the brain carnival called Encephalon.
Before I start, my co-host for tonight, Caroline from the SharpBrains blog, would like to request that you turn off the music, unplug your headphones and concentrate solely on the stories presented in this carnival, to the exclusion of everything else. She recently found that we have limited attentional capacities, and that is the reason why we turn down the radio when we are lost and trying hard to find the correct route. It’s common knowledge that browsing through a collection of cutting-edge science posts from around the world can be quite taxing on your attentional capacities, and there is no room for divided attention here. For those addicted to music, she has some advice to offer: do a simple multitasking task at first and practice before moving on to this more complex task. So all you music addicts are advised to read the Mouse Trap archives (which evidently don’t require much processing or brain use at all) and practice on that easier task first!!

Let me now start with a short recap (did you ever watch a show that didn’t have a ‘short’ recap?). Last time the Mouse Trap hosted a carnival (Synapse #6), we took the readers on a historical journey, where staff correspondent The Neurophilosopher recounted the story of how neurons were discovered. That show went on to create history (it is one of the most viewed and popular stories on the Neurophilosopher’s blog). This week the Neurophilosopher continues his historical voyage and explores how nerve function and structure were discovered. While the initial enchantment with the ‘animal spirits’/‘humour’ theory led to the emergence of related concepts like the Sanguine, Choleric, Phlegmatic and Melancholic temperaments, the latter-day intoxication of Descartes with spirits was instrumental in the emergence of the hydraulic metaphor for brain/nerves/emotions.
The conceptualization of nerves, starting from ‘hollow tubes’, to conductors of ‘animal electricity’, to the modern notions of synaptic chemical transmission and the voltage-gated function of ion channels, has involved the joint effort of many outstanding luminaries, bestowing Nobel Prizes on three pairs of scientists along the way. It is also heartening to note that Andreas Vesalius, in his ‘On the Fabric of the Human Body’, was not haunted by the animal spirits and was able to take a more rational stand. That may explain why he is one of the authors making it to the shortlist of the 25 best science books ever. (This edition of Encephalon is going very well so far: I have already managed to plug in references to some of my own posts!!)

We all love a good debate, don’t we? While it has become increasingly unnecessary to defend evolution against the tirade of Intelligent Designers/Creationists, someone has to take up the cudgels every now and then and expose the IDers for what they are. In the Debunking section, PZ Myers, of Pharyngula, responds to the continuing fascination of IDers with the eye as a designed object, and drives home the point that the presence of shared, deep elements in the diverse and different types of eyes found in the natural world is reflective of common descent. He starts off with an article on The Panda’s Thumb that argues that the backward layout of the vertebrate eye (with nerves and blood vessels placed before the photoreceptors, in the path of incoming light) is a bad design and a quirk of evolutionary history, and does not confer any claimed advantages like the ‘cooling of the retina’.
While PZ Myers concedes the possibility that an imperfect design and multiple types of eyes can still be explained by IDers as the result of an Incompetent Designer (on the other hand, one can argue that the fact that there are so many different kinds of eyes, each suited to the organism that has it, is proof of a watchmaker that designs different watches for different needs: a sports watch for trendy youths, a classical gold-plated watch for aged people, and a gizmo-heavy watch for the geek), he shows that the shared elements (opsins) in the rhabdomeric (invertebrate) and ciliary (vertebrate) eyes point towards a common historical descent and are part of the same phylogenetic tree. This makes evolution the prime candidate for explaining eye features as they exist. It is interesting to note that c-opsins are also present in invertebrates and used in circadian rhythms, while r-opsins are also found in vertebrates and are implicated in circadian rhythm resetting. Well, IDers can still use this as ammunition for their theories, claiming that the r-opsins in humans are the mythological Shiva’s Third Eye. One can play the devil’s advocate (I like this part and would gladly do the honors) and claim that the two types of eye systems, one r-based and the other c-based, are also logical outcomes of physical facts: just as two systems of watches exist, analog and digital, so too do the physical facts of perception decree that two types of eyes are possible, one r-based and the other c-based, and their presence in vertebrates and invertebrates does not point to common descent, but only to spurious relationships. I’ll let PZ thrash these arguments in his next posting.
Meanwhile, we keep pipping the Mythbusters to the post, with vigilant reporters not only debunking old and haunting myths (like that of a non-blind watchmaker), but also actively nipping new myths in the bud as they are being formulated and proposed. One such myth is that of exaggerated differences in male and female brains and abilities, and Jake Young from Pure Pedantry has covered this earlier too. This time, he returns to examine the extreme ‘male brain’, systematizing theory of autism, and concludes that if extreme male interests/abilities are indeed a symptom of autism, then, in light of the fact that male-female differences are largely socio-cultural while autism is largely genetic, one can only conclude that the differences in systematizing are an epiphenomenon, and not a cause. Moreover, the theory of assortative mating that Simon Baron-Cohen proposes, as well as his emphasis on systematizing to the exclusion of the other major symptoms of autism like social and communicative difficulties, appears lacking and non-comprehensive. Repetitive behavior can be adequately explained by systematizing, but how can something as elementary as eye-contact aversion follow from the geekiness or nerdiness of the autistic boy, and be a consequence rather than a cause?

While the theory of autism may be quite controversial, and how to help children with autism unclear, for the normal, anxiety- and stress-driven, school-going child we have some hope. They can now cope with stress, increase focus and manage emotions, all by themselves. Alvaro, from the SharpBrains blog, reports on an exciting biofeedback program that has managed to improve the performance of children appreciably by providing them feedback about their own stress levels, measured as heart rate variability, and encouraging them to use meditation techniques like yoga to calm down in stressful situations.
Ok folks, it’s time to take a break! See you after the commercials! (All good programs do have commercial breaks!)
But in this commercial break, you will not be flooded with advertisements that purport to increase your —— to double its size. (Hey guys, what are you thinking? That —— was to be filled with the name of a brain muscle. I can assure you the reference was in no way related to ‘what the normal male thinks about every 2 minutes’!) Instead, in this break, Joe Kissell, of the Interesting Thing of the Day blog, would like you all to take a Power Nap. No need to watch the commercials. Just take a short power nap and return rejuvenated, with improved memory, attention and cognitive performance. If sleeping is not your cup of tea, resort to Power Blogging (do remember to quote me if you use this term, I invented it just now!) instead. Fernette and Brock Eide at the Eide Neurolearning blog report on how blogging increases various cognitive abilities like critical, associational and analogical thinking. But just like the Power Nap, keep your Power Blog posting of a reasonable length. While a long nap would leave you groggy and unable to work, a long post may not have the same effect on you, but it would definitely end up making your readers groggy and distracted. Believe me, I know from personal experience!
Ok, welcome back! After the break, we take you out of our studio and into the field, where the actual stuff happens. Our special correspondent, Chris Patil, of the Ouroboros blog, was covering the annual scientific meeting of the Larry L. Hillblom Foundation, and reports straight from the field on the strategy of passive immunization for Alzheimer’s. The procedure involves giving antibodies that target amyloid-beta oligomers directly to the patients. Interestingly, these antibodies also target IAPP, thought to be instrumental in type II diabetes, and may offer some help in curing that disease too. As the prevalence of diabetes in India is quite high (and as I have a family history of this disease), I’ll surely be following the developments here.

It’s show time, folks! Michael, from Peripersonal Space, presents a retrospective of Charles and Ray Eames’ film and multimedia work. The makers of films like Powers of Ten, they are also famous for the creation of the Eames chair, and frequently employed and incorporated the latest cognitive psychology concepts in their films and presentations. For example, in their Rough Sketch for a Sample Lesson of a Hypothetical Course, they not only made efficient use of visuals and sounds (loud enough to make you feel vibrations), but also incorporated smells piped through the ventilation system. The effects were striking, with people smelling oil when seeing it, when no odor was actually present, simply because they expected a smell just as they had received for the other scenes.
Odor is strongly linked with memory, and, as Vaughan from the Mind Hacks blog had highlighted, the retronasal olfactory system is also strongly linked with flavor or taste perception. So, with the correct use of technology (flavor odors presented when people gasp after seeing a visual and are exhaling air, and are thus using the retronasal system), one can even induce the sense of taste. When sight, sound, smell, touch (vibrations due to loud sounds) and flavors are combined in a presentation, I am sure the results would be terrific. Michael specializes in peripersonal space and the associated proprioceptive sense, so I am sure we can even include proprioceptive, vestibular and kinesthetic effects in future presentations! Meanwhile, Michael continues his exploration of psychological themes and concepts in Charles and Ray Eames’ work, and proposes that the reason they used seven simultaneous screens in Glimpses of the U.S.A. may have been partly due to the known 7±2 limits of working memory, and how having seven screens would force viewers to sample from all of them without being overwhelmed.

While we are talking about show business, let us also indulge in some celebrity gossip. Everyone knows that the alpha male in chimpanzees is the equivalent of the human celebrity, but nobody had thought that chimps too indulged in celebrity worship. Olivier, from the AlphaPsy blog, reports on how his job as a paparazzo was finally rewarded when he came across a striking conclusion: that the other chimps, when they were replicating a social convention, were not actually learning a convention at all. They were just imitating the celebrity, the alpha male, and the conclusions derived about a theory of mind or social-convention learning in chimps, based on this experimental setup, are flawed.
In our chat section, Alvaro from the SharpBrains blog makes some Hard Talk with Dr. Brett N. Steenbarger, who has written extensively on trading and the psychology involved in improving trading behavior. They discuss how concepts of structured learning, continuous feedback, self-motivation and developing an expertise in a niche are relevant in the context of improving trading performance. If you are a short-term trader, you need to see patterns quickly, and so need to increase your processing speed and working memory. For long-term traders, analytical skills are paramount, while everybody can benefit from emotion management. Expert traders, like all experts in their fields, are the result of skills that are practiced, honed and fine-tuned, sometimes under the instructions of a coach. We are sure you would increase your trading capacities immensely if you took this advice seriously and indulged in some trader-specific training. Don’t forget to share your increased revenues with this humble blog at that time!!

While expertise as a result of hard work rather than innate talent is one of the most debated issues in the psychology of intelligence, another issue that keeps cropping up is the nature of intelligence. Is there an underlying ‘g’ factor, or is the correlation between IQ tests explicable otherwise? Hugo, from the AlphaPsy blog, has the second opinion and reports on a new paper that does away with an underlying ‘g’ and explains the correlations in terms of the effects of one ability on another.
In our last section (I can anticipate your relief!), we look towards the future and anticipate future trends. IB, from the Fibromyalgia Research Blog, reports on a recent study that found abnormal cerebral activation (increased neural recruitment) during cognitive tasks in fibromyalgia and chronic fatigue syndrome. He suggests that future study be focused on finding the neurocognitive mechanisms underlying the cognitive deficits, like abnormally slow brain waves and sleep disturbances, that are found in fibromyalgia. Before you leave, our sponsors, Dr. Kavokin, from the RDoctor blog, have some exciting gifts for you. Quickly answer a short quiz about low back pain and take home some cool prizes. While you can savor the quiz at your leisure, I would like to highlight question #4, regarding whether smoking relieves back pain or exacerbates it. That question has direct significance for us, as it indicates how mental attitudes affect physical illnesses. Rush in your entries or SMS TRUE/FALSE to our hotline number 0000. You can also e-mail your answers to email@example.com. Exciting prizes like laptops, iPods and Windows viruses are waiting for you!
That’s all for tonight. We will return in a fortnight’s time with the 12th episode of Encephalon, same time, same day. Don’t forget to tune in. Your hosts for that show will be Hugo, Olivier et al. at the AlphaPsy blog. The date is the 4th of December.
For now, please allow your host to thank all the behind-the-scenes persons – the actual contributors!! Thanks and good night!!
Descartes held that non-human animals are automata: their behavior is explicable wholly in terms of physical mechanisms. He explored the idea of a machine which looked and behaved like a human being. Knowing only seventeenth century technology, he thought two things would unmask such a machine: it could not use language creatively rather than producing stereotyped responses, and it could not produce appropriate non-verbal behavior in arbitrarily various situations (Discourse V). For him, therefore, no machine could behave like a human being. (emphasis mine)
To me this seems like a very reasonable and important speculation: although we have learned a lot about how we are able to generate an infinite variety of creative sentences using the generative grammar theory of Chomsky (I must qualify: we only know how we create a new grammatically valid sentence; the study of semantics has not complemented the study of syntax, so we still do not know why we are also able to create meaningful sentences and not just grammatically correct gibberish like “Colorless green ideas sleep furiously”: the fact that this grammatically correct sentence is still interpretable by using polysemy, homonymy or a metaphorical sense for ‘colorless’, ‘green’ etc. may provide a clue to how we map meanings, the Conceptual Metaphor Theory, but that discussion is for another day), we still do not have a coherent theory of how and why we are able to produce a variety of behavioral responses in arbitrarily various situations.
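For instance, a finite set of rewrite rules can generate an unbounded variety of grammatical strings, whether or not they are meaningful. Here is a toy sketch (the grammar and rule names are my own illustrative assumptions, not Chomsky's actual formalism):

```python
import random

random.seed(2)

# A tiny context-free grammar. The recursive NP rule is what gives a finite
# rule set an unbounded output: grammatical, though not necessarily meaningful.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Adj", "NP"], ["N"]],     # recursion -> arbitrarily long sentences
    "VP":  [["V", "Adv"]],
    "Adj": [["colorless"], ["green"]],
    "N":   [["ideas"]],
    "V":   [["sleep"]],
    "Adv": [["furiously"]],
}

def generate(symbol="S"):
    """Expand a symbol recursively; anything not in GRAMMAR is a terminal word."""
    if symbol not in GRAMMAR:
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

sentence = " ".join(generate())
# e.g. "green colorless ideas sleep furiously" -- syntactically fine, semantically odd
```

The point of the sketch is only the asymmetry the paragraph describes: the syntax side is mechanical and well understood, while nothing in these rules says which of the generated strings mean anything.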
If we stick to a physical, brain-based, reductionist, no ghost-in-the-machine, evolved-as-opposed-to-created view of human behavior, then it seems reasonable that we start from the premise of humans as an improvement over the animal models of stimulus-response (classical conditioning) or response-reinforcement (operant conditioning) theories of behavior and build upon them to explain how and what mechanism Humans have evolved to provide a behavioral flexibility as varied, creative and generative as the capacity for grammatically correct language generation. The discussions of behavioral coherence, meaningfulness, appropriateness and integrity can be left for another day, but the questions of behavioral flexibility and creativity need to be addressed and resolved now.
I’ll start by emphasizing the importance of the response-reinforcement type of mechanism and circuitry. Unfortunately, most of the work I am familiar with regarding the modeling of the human brain/mind/behavior using neural networks focuses on the connectionist model, with the implicit assumption that all response is stimulus-driven and one only needs to train the network and, using feedback, associate a correct response with a stimulus. Thus, we have an input layer for collecting or modeling sensory input, a hidden association layer, and an output layer that can be considered a motor effector system. This dissociation into an input layer representing sensory acuity and sensitivity; an output layer representing response variability and specificity; and one or more hidden layers that associate input with output maps very well onto our intuitions of a sensory system, a motor system and an association system in the brain that together generate behavior relevant to external stimuli/situations. However, this is simplistic in the sense that it is based solely on stimulus-response associations (classical conditioning) and ignores the other relevant type of association, response-reinforcement. Let me clarify that I am not implying that neural network models are behavioristic: in the form of hidden layers they leave enough room for cognitive phenomena; the contention is that they do not take into account operant conditioning mechanisms. Here it is instructive to note that feedback during training is not equivalent to operant-reinforcement learning: the feedback is necessary to strengthen the stimulus-response associations; it only indicates that a particular response triggered by a particular stimulus was correct.
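For concreteness, here is a minimal sketch of the standard connectionist setup just described: a stimulus (input) layer, an association (hidden) layer and a response (output) layer, trained purely by stimulus-response feedback. This is my own toy illustration (the AND-like task, layer sizes and learning rate are arbitrary assumptions), not any particular published model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stimulus -> hidden (association) -> response, trained only by S-R feedback.
W1 = rng.normal(0, 0.5, (2, 4))   # stimulus-to-hidden weights
W2 = rng.normal(0, 0.5, (4, 1))   # hidden-to-response weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(stimulus):
    hidden = sigmoid(stimulus @ W1)
    return hidden, sigmoid(hidden @ W2)

# Each stimulus has one 'correct' response (here an AND-like pairing).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [0.], [0.], [1.]])

_, out = forward(X)
mse_before = float(np.mean((y - out) ** 2))

for _ in range(5000):                      # feedback strengthens S-R links
    h, out = forward(X)
    d2 = (y - out) * out * (1 - out)       # output-layer error signal
    W1 += 0.5 * X.T @ ((d2 @ W2.T) * h * (1 - h))
    W2 += 0.5 * h.T @ d2

_, out = forward(X)
mse_after = float(np.mean((y - out) ** 2))
```

Note what the feedback does here: it only corrects the response *given* a stimulus. Nothing in this loop lets the network emit a response on its own and have its probability shaped by consequences, which is exactly the gap argued above.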
For operant learning to take place, the behavior has to be spontaneously generated, and its probability of occurrence adjusted based on its history of reinforcement. This takes us to an apparently hard problem: how can behavior be spontaneously generated? All our lives we have equated reductionism and physicalism with determinism, so a plea for spontaneous behavior seems almost like begging for a ghost in the machine. Yet on careful thinking, the problem of spontaneity (behavior in the absence of stimulus) is not that problematic. One could have a random number generator and code for random responses triggered by it. One could claim that introducing randomness in no way gives us ‘free will’, but that is a different argument. What we are concerned with is spontaneous action, and not necessarily ‘free’ or ‘willed’ action.
To keep things simple, consider a periodic oscillator in your neural network. Let us say it takes 12 hours to complete one oscillation (i.e., it is a simple inductor-capacitor pair, and it takes 6 hours for the capacitor to discharge and another 6 hours for it to recharge); now we can make a priori connections between this 12-hour clock in the hidden layer and one of the outputs in the output layer, such that the output gets activated whenever the capacitor has fully discharged, i.e., at a periodic interval of 12 hours. Suppose that this output response is labeled ‘eat’. Thus we have coded into our neural network a spontaneous mechanism by which it ‘eats’ at 12-hour intervals.
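In code, that a priori wiring might look like the following toy sketch (hours as discrete steps; the inductor-capacitor detail is abstracted away into a modulus test, an assumption of mine):

```python
PERIOD_H = 12   # one full oscillation: 6 h discharge + 6 h recharge

def clock_fires(t_hours):
    """The hard-wired clock-to-output connection: the 'eat' unit is
    activated exactly when the oscillator completes a cycle."""
    return t_hours > 0 and t_hours % PERIOD_H == 0

# With no external stimulus at all, the network still 'eats' periodically.
eat_times = [t for t in range(1, 49) if clock_fires(t)]   # two simulated days
# fires at hours 12, 24, 36 and 48
```

The behavior here is stimulus-free but fully deterministic; nothing about it is learned yet, which is the objection taken up next.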
Till now we haven’t really trained our neural net, and moreover we have assumed circuitry like a periodic oscillator from the beginning, so you may object that this is not how our brain works. But let us remember that, just like the normal neurons in the brain that form the model for neurons in a neural network, there is also a suprachiasmatic nucleus that gives rise to circadian rhythms and implements a periodic clock.
As for training, one can assume the existence of just one periodic clock of small granularity, say one second, in the system, and then, using accumulators that code for how many ticks have elapsed since the last trigger, one can code for any arbitrary periodic response of greater-than-one-second granularity. Moreover, one need not code for such accumulators: they would arise automatically out of training, from the other neurons connected to this ‘clock’ and lying between the clock and the output layer. Suppose that initially a one-second clock output is connected (via intervening hidden neuron units) to an output marked ‘eat’. Now, we have feedback in this system also. Suppose that while training, we provide positive feedback only on every 60*60*12th trial (and all its multiples) and provide negative feedback on all other trials; it is not inconceivable that an accumulator neural unit would form in the hidden layer and count the number of ticks coming out of the clock: it would send the trigger to the output layer only on every 60*60*12th trial and suppress the output of the clock on every other trial. Voila! We now have a 12-hour clock (implemented digitally by counting ticks) inside our neural network, coding for a 12-hour periodic response. We just needed one ‘innate’ clock mechanism, and using that and the facts of ‘operant conditioning’ or ‘response-reinforcement’ pairing, we can create an arbitrary number of such clocks in our body/brain. Also, notice that we need just one 12-hour clock, but can flexibly code for many different 12-hour periodic behaviors. Thus, if the ‘count’ in the accumulator is zero, we ‘eat’; if the count is midway between 0 and 60*60*12, we ‘sleep’. Thus, though both eating and sleeping follow a 12-hour cycle, they do not occur concurrently, but are separated by a 6-hour gap.
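The accumulator-plus-gate idea can be sketched directly. In this toy simulation of mine the eat/sleep phases are hard-coded rather than learned from feedback, which is an admitted simplification of the training story above:

```python
TICKS_PER_CYCLE = 60 * 60 * 12      # one 12-hour cycle in 1-second ticks

class Accumulator:
    """A hidden unit that counts base-clock ticks and gates the output layer,
    letting one innate 1-second clock drive several 12-hour behaviors."""
    def __init__(self):
        self.count = 0

    def tick(self):
        self.count = (self.count + 1) % TICKS_PER_CYCLE
        actions = []
        if self.count == 0:
            actions.append("eat")                  # trigger at cycle start
        if self.count == TICKS_PER_CYCLE // 2:
            actions.append("sleep")                # same clock, 6 hours later
        return actions

acc = Accumulator()
log = []
for t in range(TICKS_PER_CYCLE * 2):               # simulate 24 hours of ticks
    for action in acc.tick():
        log.append((t + 1, action))
# 'eat' and 'sleep' both recur every 12 hours, offset by a 6-hour gap
```

One accumulator, two phase-offset behaviors: this is the "many 12-hour behaviors from one 12-hour clock" point in executable form.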
Suppose further that one reinforcement we are constantly exposed to, and that we use for training the clock, is sunlight. The circadian clock is reinforced, say, only by the reinforcement provided by exposure to the midday sun, and by no other reinforcements. Then we have a mechanism in place for the external tuning of our internal clocks to a 24-hour circadian rhythm. It is conceivable that for training other periodic operant actions, one need not depend on external reinforcement or feedback, but may implement an internal reinforcement mechanism. To make my point clear: while the ‘eat’ action, i.e., a voluntary operant action, may be generated randomly at first and, in the traditional sense of reinforcement, be accompanied by the intake of food, which in the classical sense of the word is a ‘reinforcement’, the intake of food, which is part and parcel of the ‘eat’ action, should not be treated as the ‘feedback’ required during training of the clock. During the training phase, though the operant may be activated at different times (and by the consequent intake of food be intrinsically reinforced), the feedback should be positive only for operant activations in line with the periodic training, i.e., only on trials on which the operant is produced as per the periodic training requirement; on all other trials, negative feedback should be provided. After the training period, not only would the operant ‘eat’ be associated with the reinforcement ‘food’: it would also occur with a certain rhythm and periodicity. The goal of training here is not to associate a stimulus with a response (the usual neural network associative learning), but to associate an operant (response) with a schedule (or a concept of ‘time’).
It’s not that revolutionary a concept, I hope: after all, an association of a stimulus (or ‘space’) with a response per se is meaningless; it is meaningful only in the sense that the response is reinforced in the presence of the stimulus, and the presence of the stimulus provides us with a clue to indulge in a behavior that would result in reinforcement. On similar lines, an association of a response with a schedule may seem arbitrary and meaningless; it is meaningful in the sense that the response is reinforced at a scheduled time/event, and the occurrence of the scheduled time/event provides us with a reliable clue to indulge in a behavior that would result in reinforcement.
To clarify by way of an example, ‘shouting’ may be considered a response that is normally reinforcing, because of, say, its cathartic nature. Now, ‘shouting’ on seeing your spouse's lousy behavior may have had a history of reinforcement, and you may have a strong association between seeing ‘spouse's lousy behavior’ and ‘shouting’. You thus have a stimulus-response pair. Why you don't shout always, or when, say, the stimulus is your boss's lousy behavior, is because in those stimulus conditions the response ‘shouting’, though still cathartic, may have severe negative costs associated with it, and hence in those situations it is not really reinforced. Hence the need for an association between ‘spouse's lousy behavior’ and ‘shouting’: only in the presence of that specific stimulus is shouting reinforcing, and not in all cases.
Take another example, that of ‘eating’, which again can be considered a normally rewarding and reinforcing response, as it provides us with nutrition. Now, ‘eating’ two or three times a day may be rewarding; but eating all the time, or only on a 108-hour periodicity, may not be that reinforcing a response, because such a schedule does not take care of our body's requirements. While eating on a 108-hour periodicity would impose severe costs on us in terms of undernutrition and survival, eating on a 2-minute periodicity would not be that reinforcing either. Thus, the idea of training spontaneous behaviors as per a schedule is not that problematic.
Having taken a long diversion, arguing the case for ‘operant conditioning’-based training of neural networks, let me come to my main point.
While the ‘stimulus’ and the input layer represent the external ‘situation’ that the organism is facing, the network comprising the clocks and accumulators represents the internal state and ‘needs’ of the organism. One may even claim, a bit boldly, that they represent the goals or motivations of the organism.
An ‘eat’ clock that is about to trigger an ‘eat’ response may represent a need to eat. This need not be a digital clock, with an ‘eating’ act triggered only when the 12-hour cycle is completed to the dot. Rather, this would be a probabilistic, analog clock, with the probability of the eating response getting higher as the 12-hour cycle comes to an end, and the clock being reset whenever the eating response happens. If the clock is in the early phases of the cycle (just after an eating response), the need for eating (hunger) is low; when the clock is in the last phases of the cycle, the hunger need is strong and would make the ‘eating’ action more and more probable.
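A minimal sketch of such a probabilistic, analog hunger clock (the linear ramp and the 12-hour cycle length are my own illustrative assumptions):

```python
import random

CYCLE = 12.0  # hours per hunger cycle

class HungerClock:
    def __init__(self):
        self.phase = 0.0  # hours elapsed since the last 'eat'

    def eat_probability(self):
        # the need grows with elapsed phase, saturating at 1.0 near cycle end
        return min(1.0, self.phase / CYCLE)

    def tick(self, hours=1.0):
        self.phase += hours
        if random.random() < self.eat_probability():
            self.phase = 0.0  # eating resets the clock
            return "eat"
        return None

clock = HungerClock()
assert clock.eat_probability() == 0.0   # just ate: no hunger
clock.phase = 11.0
assert clock.eat_probability() > 0.9    # late in the cycle: strong need
```

Nothing hinges on the linear ramp; any monotonically rising function of phase, reset on eating, captures the idea of need as clock state.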
Again, this response-reinforcement system need not be isolated from the stimulus-response system. Say one sees the stimulus ‘food’, and the hunger clock is still showing ‘medium hungry’. The partial activation of the ‘eat’ action as a result of seeing the stimulus ‘food’ (other actions, like ‘throw the food’ or ‘ignore the food’, may also be activated) may win over the other competing responses to the stimulus, as the hunger clock is still providing a medium probability of ‘hunger’ activation, and hence one may end up acting ‘eat’. This, however, may reset the hunger clock, and now a second ‘food’ stimulus may not be able to trigger the ‘eat’ response, as the activation of ‘eat’ due to the hunger clock is minimal and other competing actions may win over ‘eat’.
To illustrate the interaction between stimulus-response and response-reinforcement in another way: on seeing the written word ‘hunger’ as a stimulus, one consequence of that stimulus could be to manipulate the internal hunger clock so that its need for food is increased. This would be a simple operation of advancing the clock count, making the ‘need for food’ stronger and thus increasing the probability of occurrence of the ‘eat’ action.
I'd also like to take a leap here and equate ‘needs’ with goals and motivations. Thus, some of the most motivating factors for humans, like food, sex and sleep, can be explained in terms of underlying needs or drives, which seem to be periodic in nature. It is also interesting to note that many of them do have cycles associated with them (we have sleep cycles and eating cycles), that these cycles are often linked with each other or with the circadian rhythm, and that if the clock goes haywire it has multiple linked effects across the whole spectrum of motivational ‘needs’. In a manic phase one would have low needs to sleep, eat etc., while the opposite may be true in depression.
That brings me finally to Marvin Minsky and his AI attempts to code for human behavioral complexity.
In his analysis of the levels of mental activity, he starts with the traditional if, then rule and then refines it to include both situations and goals in the if part. To me this seems intuitively appealing: One needs to take into account not only the external ‘situation’, but also the internal ‘goals’ and then come up with a set of possible actions and maybe a single action that is an outcome of the combined ‘situation’ and ‘goals’ input.
However, Minsky does not think that simple if-then rules, even when they take ‘goals’ into consideration, would suffice, so he posits if-then-result rules. To me it is not clear how introducing a result clause makes any difference: both goals and stimulus may lead to multiple if-then rule matches and multiple action activations. These action activations are nothing but what Minsky has clubbed into the result clause, and we still have the hard problem of, given a set of matching rules, how we choose one of them over the others.
Minsky has evidently thought about this and says:
What happens when your situation matches the Ifs of several different rules? Then you’ll need some way to choose among them. One policy might arrange those rules in some order of priority. Another way would be to use the rule that has worked for you most recently. Yet another way would be to choose rules probabilistically.
To me this seems not a problem of choosing which rule to use, but of choosing which response to make, given the several possible responses that result from applying several rules to this situation/goal combination. It is tempting to assume that the ‘needs’ or ‘goals’ would uniquely determine the response given ambiguous or competing responses to a stimulus; yet I can imagine scenarios where the needs of the body do not provide a reliable clue, and one may need the algorithms/heuristics suggested by Minsky to resolve conflicts. Thus, I see the utility of if-then-result rules: we need a representation not only of the if part of the rule (goals/stimulus), which tells us the set of possible actions that can be triggered by this stimulus/situation/needs combination, but also of the result part of the rule, which tells us what reinforcement values these responses (actions) have for us; we can then use this value-response association to resolve the conflict and choose one response over the others. This response-value association seems very much like the operant-reinforcement association, so I am tempted once more to believe that the value one ascribes to a response may change with bodily needs, and is rather reflective of bodily needs; but I'll leave that assumption for now and instead assume that somehow we do have different priorities assigned to the responses (and not to the rules, as Minsky had originally proposed) and make the selection on the basis of those priorities.
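The scheme just described can be sketched concretely (all rule contents and values here are illustrative assumptions of mine, not Minsky's): the if parts gather candidate responses, and the conflict is resolved by the value currently attached to each response, not by a fixed priority on the rules.

```python
# if-then-result rules: (situation, goal) pairs map to candidate responses
rules = [
    {"if": ("food present", "hungry"), "then": "eat"},
    {"if": ("food present", "hungry"), "then": "hoard food"},
    {"if": ("food present", "not hungry"), "then": "ignore food"},
]

# response -> value association; on the view above this is dynamic,
# reflecting the body's current needs
response_value = {"eat": 0.9, "hoard food": 0.4, "ignore food": 0.1}

def select_response(situation, goal):
    # gather all responses whose if part matches the situation/goal combo
    candidates = [r["then"] for r in rules if r["if"] == (situation, goal)]
    if not candidates:
        return None
    # resolve the conflict by the current value of each response
    return max(candidates, key=lambda resp: response_value[resp])

assert select_response("food present", "hungry") == "eat"
```

Swapping `response_value` for a need-dependent function would make the same rule set yield different behavior at different clock phases, which is the flexibility being argued for.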
Though I have posited a single priority-based probabilistic selection of response, it is possible that a variety of selection mechanisms and algorithms are used and are activated selectively based on the problem at hand.
This brings me to the critic-selector model of mind by Minsky. As per this model, one needs both critical thinking and problem-solving abilities to act adaptively. One need not just be good at solving problems: one also has to understand and frame the right problem and then use the problem-solving approach that is best suited to it.
Thus, the first task is to recognize a problem type correctly. After recognizing a problem correctly, we may apply different selectors, or problem-solving strategies, to different problems.
He also posits that most of our problem solving is analogical and not logical. Thus, recognizing the problem is more like recognizing a past analogous problem, and selecting is then applying the methods that worked in that case to this problem.
How does that relate to our discussions of behavioral flexibility? I believe that every time we are presented with a stimulus, or have to decide how to behave in response to that stimulus, we are faced with a problem: that of choosing one response over all others. We need to activate a selection mechanism, and that selection mechanism may differ based on the critics we have used to define the problem. If the selection mechanism were fixed and hard-wired, we wouldn't have behavioral flexibility. Because the selection mechanism may differ based on our framing of the problem in terms of the appropriate critics, our behavioral response may be varied and flexible. At times we may use the selector that takes into account only the priorities of the different responses in terms of the needs of the body; at other times the selector may be guided by different selection mechanisms that involve emotions and values as the driving factors.
Minsky has also built a hierarchy of critic-selector associations, and I will discuss them in the context of developmental unfolding in a subsequent post. For now, it is sufficient to note that different types of selection mechanisms would be required to narrow the response set under different critical appraisals of the initial problem. To recap: a stimulus may trigger different responses simultaneously, and a selection mechanism would select the appropriate response based on the values associated with the responses and on the selection algorithm that has been activated by our appraisal of the reason for the conflicting, competing responses. While critics help us formulate the reason for multiple responses to the same stimulus, the selector helps us apply different selection strategies to the response set, based on what selection strategy had worked on an earlier problem that involved analogous critics.
One can further dissociate this into two processes. One is grammar-based and syntactic: it uses rules for generating a valid behavioral action based on the critic and selector predicates, and on the particular response sets and strategies that make up the critic and selector clauses respectively. By combining and recombining the different critics and selectors, one can make an infinite number of rules for how to respond to a given situation, and each such rule application may potentially lead to a different action. The other process is semantic: how the critics are mapped onto response sets, and how selectors are mapped onto different value preferences.
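The syntax/semantics split above can be sketched as follows (all critic names, selector names and values are my own illustrative assumptions): the semantic part is two lookup tables, and the syntactic part is the free combination of any critic with any selector.

```python
# semantics: critic -> candidate response set; selector -> selection strategy
critic_responses = {
    "conflict of needs": ["eat", "sleep"],
    "novel stimulus":    ["explore", "withdraw"],
}
selector_strategies = {
    "by priority": lambda responses, values: max(responses, key=values.get),
    "most recent": lambda responses, values: responses[-1],
}

values = {"eat": 0.8, "sleep": 0.3, "explore": 0.6, "withdraw": 0.5}

def respond(critic, selector):
    # syntax: any critic may be paired with any selector, so the number of
    # distinct critic-selector rules grows with the product of the two vocabularies
    responses = critic_responses[critic]
    strategy = selector_strategies[selector]
    return strategy(responses, values)

assert respond("conflict of needs", "by priority") == "eat"
assert respond("novel stimulus", "most recent") == "withdraw"
```

The combinatorics are the point: two small vocabularies already generate four distinct rules, and enlarging either table multiplies the behavioral repertoire without changing the machinery.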
Returning to response selection: given a stimulus, clearly two processes are at work. One uses the stored if-then rules (the stimulus-response associations) to make available to us the set of all actions that are a valid response to the situation; the other uses the then-result rules (and the response-value associations, which I believe are dynamic in nature and keep changing) to choose one response from that set, as per the ‘subjective’ value preferred at the moment. This may be the foundation for the ‘memory’ and ‘attention’ dissociations in the working-memory abilities used in the stroop task, and it is tempting to think that while the DLPFC and the executive centers determine the set of all possible actions (utilizing memory) given a particular situation, the ACC selects among the competing responses based on the values associated with them, by selectively directing attention to the selected response/stimulus/rule.
Also, it seems evident that one way to increase adaptive responses would be to become proficient in discriminating stimuli and perceiving the subjective world accurately; the other would be to become more and more proficient in directing attention to a particular stimulus/response over others, and to our internal representations of them, so that we can discriminate between the different responses available and choose between them based on an accurate assessment of our current needs/goals.
Using his ideas of sensorimotor function, Hughlings-Jackson described two “halves” of consciousness: a subject half (representations of sensory function) and an object half (representations of motor function). To describe subject consciousness, he used the example of sensory representations when visualizing an object. The object is initially perceived at all sensory levels. This produces a sensory representation of the object at all sensory levels. The next day, one can think of the object and have a mental idea of it, without actually seeing the object. This mental representation is the sensory or subject consciousness for the object, based on the stored sensory information of the initial perception of it.
What enables one to think of the object? This is the other half of consciousness, the motor side of consciousness, which Hughlings-Jackson termed “object consciousness.” Object consciousness is the faculty of “calling up” mental images into consciousness, the mental ability to direct attention to aspects of subject consciousness. Hughlings-Jackson related subject and object consciousness as follows:
The substrata of consciousness are double, as we might infer from the physical duality and separateness of the highest nervous centres. The more correct expression is that there are two extremes. At the one extreme the substrata serve in subject consciousness. But it is convenient to use the word “double.”
Hughlings-Jackson saw the two halves of consciousness as constantly interacting with each other, the subjective half providing a store of mental representations of information that the objective half used to interact with the environment.
The term “subjective” answers to what is physically the effect of the environment on the organism; the term “objective” to what is physically the reacting of the organism on the environment.
Hughlings-Jackson's concept of subjective consciousness is akin to the if-then representation of mental rules. One needs to perceive the stimuli as clearly as possible and to represent them along with their associated actions, so that an appropriate response set can be activated to respond to the environment. His object consciousness is the attentional mechanism needed to narrow down the options and focus on those mental representations and responses that are to be selected and used for interacting with the environment.
As per him, subject and object consciousness arise from a need to represent sensations (stimuli) and movements (responses) respectively, and this need is apparent if our stimulus-response and response-reinforcement mappings have to be taken into account for determining appropriate action.
All nervous centres represent or re-represent impressions and movements. The highest centres are those which form the anatomical substrata of consciousness, and they differ from the lower centres in compound degree only. They represent over again, but in more numerous combinations, in greater complexity, specialty, and multiplicity of associations, the very same impressions and movements which the lower, and through them the lowest, centres represent.
He had postulated that temporal lobe epilepsy involves a loss of objective consciousness (leading to automatic movements, as opposed to voluntary movements that follow a schedule and do not happen continuously) and an increase in subjective consciousness (leading to feelings like deja-vu, or an over-consciousness in which every stimulus seems familiar and triggers the same response set and nothing seems novel: the dreamy state). These he described as the positive and negative symptoms, or deficits, associated with an epileptic episode. It is interesting to note that one of the positive symptoms he describes in epilepsy, associated with subjective consciousness of the third degree, is ‘Mania’: the same label that Minsky uses for a Critic at his sixth, self-conscious level of thinking. The critic Minsky lists is:
Self-Conscious Critics. Some assessments may even affect one’s current image of oneself, and this can affect one’s overall state:
None of my goals seem valuable. (Depression.)
I’m losing track of what I am doing. (Confusion.)
I can achieve any goal I like! (Mania.)
I could lose my job if I fail at this. (Anxiety.)
Would my friends approve of this? (Insecurity.)
Interesting to note that this Critic or subjective appraisal of the problem in terms of Mania can lead to a subjective consciousness that is characterized as Mania.
If Hughlings-Jackson studied epilepsy correctly and made some valid inferences, then this may tell us a lot about how we respond flexibly to novel/familiar situations, and about how the internal complexity required to ensure flexible behavior leads to representational needs in the brain, which might lead to the necessity of consciousness.
As per this study, as we have normally only seen yellow bananas and that color association is quite strong in our minds, when we perceive a ‘differently’ colored banana we are bound to see it as more yellowish than the actual hue in which the banana is presented.
Basically, they used two extremely good experiments to show that when viewing a banana (which is generally yellow), the yellow color perception is automatically activated in our brains: thus a gray-matched banana would appear yellowish, while a task that requires matching a pink banana to a gray background would result in a bluish-gray banana, as blue is the opponent color of yellow, and blue is added to the background gray to compensate for the memory-activated yellow color perception.
It is interesting to draw parallels here with the stroop test. In this test, color words like ‘red’ and ‘yellow’ also appear to invoke automatic activation of the corresponding color in the brain, and this interferes with the correct naming of the actual color in which the color word is presented. Developing Intelligence has a very interesting and promising post in which he explores current research and computational models that suggest the mechanism underlying stroop interference is not directed inhibition of prepotent responses, but lateral excitation among color-perception and linguistic modules. On this account, the color-perception area of the brain is always activated when a color term is presented; in incongruent trials, more activation is seen in this to-be-ignored module because the two conflicting color activations (one due to the actual color of the word, the other due to the color perception activated by the linguistic color word, e.g. ‘red’) compete against each other and lead to greater activation. This is in contrast to the view that the greater activation is due to directed inhibition. The new explanation also seems to fit with brain anatomy: there are only local inhibition processes, and it is reconcilable with the lack of long-range inhibitory pathways in the neocortex.
Thus, to me it seems more and more possible that the stroop effect may be due to an actual ‘yellowish’ hue being perceived in the brain on seeing the linguistic term ‘yellow’. I know the two examples are not the same: a yellow banana actually has yellow color, and thus its memory may affect the perception of a strangely colored banana; but maybe the linguistic term ‘yellow’ is also very strongly related in our minds to actual yellow hue perception, and maybe we are all synaesthetic to the extent that we literally see linguistic color terms in color rather than in black-and-white (or whatever the text color is).
There have been various claims about the ability of language to shape thought and perception, and one of the oft-cited phenomena supporting this Sapir-Whorf hypothesis is the evolution of color terms in languages, and how the lack of a color term in a language may influence that language's users' ability to make categorical distinctions between colors, or even to perceive the differing colors.
The basic color terms were originally proposed by Berlin and Kay (1969) in their seminal study Basic Color Terms: Their Universality and Evolution, in which they proposed that different languages (written/oral) have evolved to differing levels: a culture would start with only two color terms, equivalent to black and white (or dark and light), before adding subsequent colors roughly in the order red; green and yellow; blue; brown; and orange, pink, purple and gray. Based on this, they proposed a grouping of the ninety-eight languages studied into seven stages of an evolutionary sequence, running from primitive languages with words only for WHITE and BLACK to more advanced languages with words for the whole range of colors.
STAGE I: WHITE, BLACK. Nine languages: 7 New Guinea, 1 Congo, 1 South India.
STAGE II: WHITE, BLACK, RED. Twenty-one languages: 2 Amerindian, 16 African, 1 Pacific, 1 Australian Aboriginal, 1 South India.
STAGE IIIa: WHITE, BLACK, RED, GREEN. Eight languages: 6 African, 1 Philippine, 1 New Guinea.
STAGE IIIb: WHITE, BLACK, RED, YELLOW. Nine languages: 2 Australian Aboriginal, 1 Philippine, 3 Polynesian, 1 Greek (Homeric), 2 African.
STAGE IV: WHITE, BLACK, RED, GREEN, YELLOW. Eighteen languages: 12 Amerindian, 1 Sumatra, 4 African, 1 Eskimo.
STAGE V: WHITE, BLACK, RED, GREEN, YELLOW, BLUE. Eight languages: 5 African, 1 Chinese, 1 Philippine, 1 South India.
STAGE VI: WHITE, BLACK, RED, GREEN, YELLOW, BLUE, BROWN. Five languages: 2 African, 1 Sumatra, 1 South India, 1 Amerindian.
STAGE VII: COMPLETE ARRAY OF COLORS. Twenty languages: 1 Arabic, 2 Malayan, 6 European, 1 Chinese, 1 Indian, 2 African, 1 Hebrew, 1 Japanese, 1 Korean, 2 South East Asian, 1 Amerindian, 1 Philippine.
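The Berlin and Kay sequence above is an implicational hierarchy: possessing a later term implies possessing all earlier ones. A small sketch of that idea (the code and its simplifications are mine; in particular it follows the IIIa branch, whereas actual stage III languages may have yellow before green):

```python
# color terms acquired at each stage, per the 1969 ordering summarized above
STAGE_TERMS = [
    ["white", "black"],                    # Stage I
    ["red"],                               # Stage II
    ["green"],                             # Stage IIIa ("yellow" in IIIb)
    ["yellow"],                            # Stage IV adds the other of green/yellow
    ["blue"],                              # Stage V
    ["brown"],                             # Stage VI
    ["purple", "pink", "orange", "gray"],  # Stage VII
]

def stage_of(color_terms):
    """Highest stage such that its terms, and all earlier ones, are present."""
    acquired = set(color_terms)
    stage = 0
    for i, terms in enumerate(STAGE_TERMS, start=1):
        if all(t in acquired for t in terms):
            stage = i
        else:
            break
    return stage

assert stage_of(["black", "white"]) == 1
assert stage_of(["black", "white", "red", "green", "yellow", "blue"]) == 5
```

The `break` is what encodes the implicational claim: a language cannot, on this scheme, have a term for blue while lacking one for red.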
This classification schema has been revisited in light of recent research, most notably the World Color Survey. Kay and Maffi (1999), in Color Appearance and the Emergence and Evolution of Basic Color Lexicons, discuss the results and come up with a five-stage developmental model of languages based on the black, white, red, yellow, green and blue terms only, leaving the other basic terms (brown, orange, purple and pink) out of the analysis.
Their stages are essentially the same as those of Berlin and Kay, with stage IIIa (white, black, red, green) being more common than stage IIIb (white, black, red, yellow) among the stage III languages.
Cognitive Daily ran a recent commentary on the World Color Survey, and as per the analysis presented there, 41 of the languages covered belonged to stage V and the remaining 69 languages to stage IV (in these languages, as no separate word for blue is present, blue and green are confused and labeled by a single term, ‘grue’). The finding that, across cultures, people who have a term for a particular color in their language agree on the actual hue to which the color term corresponds is a strong argument in favor of the universality of color categories: the blue of one language is the same as the blue of another language, and this is most probably due to the underlying physiology. See my blog posts related to color perception in humans in this regard.
Conversely, the fact that those languages that had no term for blue (but only had a common term Grue for blue and green), also found it difficult to distinguish between blue and green hues, suggests that having a term for a color does influence the way in which we categorize the colors and possibly also the way we perceive them. The latter (influence on perception) may be a more controversial claim, but the fact that color terms affect cognition (categorization) is relatively uncontroversial.
It is instructive to pause here and note some facts from color-vision physiology. The rods give us the ability to see even in the dark and may have been the first to evolve, giving us the concepts of black and white. The cones may have evolved later to give a sense of color. The opponent process utilizing red cones and green cones gives rise to the perception of the colors red and green. It is plausible that first the red cones evolved (on an evolutionary time-frame), giving a red signal and thus a red qualia/red color term; later came the green cones, giving a green signal and a green qualia/green color term. The red-green opponent process was born later and refined the perception of red and green. It is also plausible that the brain started combining the red and green signals (R+G) to perceive yellow. Thus, a perception of red, green and yellow could be generated by the brain based on the red and green cones alone. The R+G=Y signal does exist in the brain and is one of the signals involved in the opponent process of blue-yellow perception. The blue cones apparently came last; using the signal from the blue cone and the Y=R+G signal, the opponent process of blue-yellow perception enabled the perception of a blue qualia too, and a corresponding color term for blue. Further, it is instructive to note that brown (the stage V to stage VI transition of languages based on color terms) is perceived in the brain by a complex process involving signals from both the R-G and B-Y opponent processes (specifically, mixing of red and yellow at a point in space to give orange) and comparing and contrasting this information with the intensity (the black-white achromatic signal) of the surrounding region.
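The opponent-channel arithmetic just described can be sketched as follows (the coefficients are illustrative, not physiological constants): the red-green channel contrasts the ‘red’ and ‘green’ cone signals, and the blue-yellow channel contrasts the ‘blue’ cone signal against the derived yellow signal Y = R + G.

```python
def opponent_channels(r_cone, g_cone, b_cone):
    """Toy opponent-process model: cone signals -> two chromatic channels."""
    red_green = r_cone - g_cone      # positive: reddish, negative: greenish
    yellow = r_cone + g_cone         # Y = R + G, the derived yellow signal
    blue_yellow = b_cone - yellow    # positive: bluish, negative: yellowish
    return red_green, blue_yellow

# a stimulus exciting red and green cones equally, with little blue input,
# reads as yellow: balanced red-green channel, strongly "yellow" blue-yellow channel
rg, by = opponent_channels(0.8, 0.8, 0.1)
assert rg == 0.0
assert by < 0  # yellowish
```

Note that yellow here is perceptible with no dedicated yellow receptor at all, which is the point of the argument above: the yellow category can precede the blue cone in the evolutionary story.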
This leap, from opponent processes to a perception based on contrast with surrounding areas, marks a significant jump in the perceptual mechanism employed (as is common in stage VI transformations of developmental models); correspondingly, terms for brown are rarer, are more difficult to claim as universal across languages, and must have evolved later. The perceptual processes of stages VII and VIII may determine how we perceive purple, pink, orange and gray, but a more physiological analysis of the perceptual mechanisms involved will have to wait for another day, and for another, more informed vision researcher. Here it suffices to note that there are sound physiological reasons why the color terms may have evolved the way they did over historical and evolutionary time scales, and that some modern languages may still lack terms for some colors; the ability to distinguish these may have evolved recently and, given the different perceptual processes involved, may not be the same in all cultures.
Before speculating further, it would serve us well to get acquainted with the latest consensus regarding color terms and what they tell us about language and cognition. Kay and Regier (2005), in their article Language, thought, and color: Recent developments, TICS, aptly summarize the state-of-the-art view, an interactionist one in which both Nature and Nurture, Universalism and Relativism, have their place. As per them,
The language-and-thought debate in the color domain has been framed by two questions: 1. Is color naming across languages largely a matter of arbitrary linguistic convention? 2. Do cross-language differences in color naming cause corresponding differences in color cognition?
In the standard rhetoric of the debate, a ‘relativist’ argues that both answers are Yes, and a ‘universalist’ that both are No. However, a number of recent studies, when viewed in aggregate, undermine these traditional stances. These studies suggest instead that there are universal tendencies in color naming (i.e. No to question 1) but that naming differences across languages do cause differences in color cognition (i.e. Yes to question 2).
We have already seen how the concept of focal colors (as outlined by Kay) is valid and seems to constitute a universal cognitive basis for both color language and color memory. Further, we have seen some neuro-physiological support for the emergence of the focal colors red, yellow, green, blue and brown. Jameson and D'Andrade have argued that the universal focal colors are salience maxima in color space and that universals of color naming flow from a process that partitions color space in a way that maximizes information. A recent study by Griffin LD (2006), The Basic Colour Categories are optimal for classification, J Roy Soc Interface 3(6):71-85, seems to support this hypothesis and posits that the eleven basic color categories identified by Kay are optimal and useful in computer machine vision too. All these pieces of evidence are compatible with each other and suggest that the basic properties and number of color categories, compatible with an optimal partitioning of color space, have led to the emergence of a corresponding neuro-physiological/perceptual apparatus in humans to detect these categories, and have thus led just that many color terms to evolve, in the order of the complexity of these mechanisms and of the incremental advantage they provide in categorization.
On the relativistic side, it is claimed that cognitive variables like privileged memory, similarity judgments, or paired-associate learning for focal colors are well predicted by the boundaries of each language's color categories: a form of categorical perception of color. Since these boundaries vary across languages, speakers of different languages apprehend color differently. Moreover, these linguistic differences seem to actually cause, rather than merely correlate with, cognitive differences. The further argument is that color terms are arbitrary and that the color terms determine the perception of colors absolutely. Roberson, Davidoff et al., in Color Categories are not universal: New evidence from Traditional and Western cultures, argue that the evidence supporting focal colors and the concept of universal categorical perception arising from them (viz. privileged memory for them, or paired-associate learning for the proposed universal colors) is rendered incorrect when the effect of verbalization (the use of linguistic tokens) is taken into account. As per them (emphasis mine):
In native English speakers a series of experiments found that verbal interference selectively removed the defining features of Categorical Perception. Under verbal interference, there was no longer the greater accuracy normally observed for cross-category judgments compared to within-category judgments. It thus appears that while both visual and verbal codes may be employed in the recognition memory of colors, subjects only make use of verbal coding when demonstrating Categorical Perception (Roberson & Davidoff, 2000). In a brain-damaged patient suffering from a naming disorder, the loss of labels radically impaired his ability to categorize colors
Participants from a traditional hunter-gatherer culture, whose language contains five basic color terms (under the definition of Kay Berlin & Merrifield, 1991), showed no tendency towards a cognitive organization of color resembling that of English speakers. They did not find best examples of English color categories easier to learn or remember than poor examples and, in a further set of experiments, evidence of Categorical Perception was found in both languages, but only at their own linguistic category boundaries.
Although the authors draw extreme conclusions from their findings, Kay moderates the viewpoint and concludes (emphasis mine):
It has been widely assumed that language is the cause of color categorical perception. This is suggested since – as we have seen – named category boundaries vary across languages, and categorical perception varies with them. However, Franklin and Davies have found startling evidence of categorical perception at some of these same boundaries in pre-linguistic infants and toddlers of several languages. Thus, some categorical color distinctions apparently exist prior to language, and may then be reinforced, modulated, or eliminated by learning a particular language.
This finally brings us to the post by Developing Intelligence regarding labels as an accelerator of ontological development. Though Chris dismisses the strong form of the Sapir-Whorf hypothesis at the outset (especially in relation to colors), he presents a study that leads to a reasonable conclusion that language can accelerate the process of sortal/kind discrimination, such that a skill normally only demonstrated by 12-month-olds was in this case demonstrated by 9-month-olds given the proper linguistic input. Here, one is not arguing that sortal/kind discrimination would be impossible in the absence of linguistic input; one is merely claiming that sortal/kind discrimination is facilitated by language and happens earlier in the developmental cycle when linguistic labels are available. Infants who lack labels would accordingly have a different cognitive/perceptual experience from infants who use labels and can make the sortal/kind discrimination.
From the above, it may be inferred that though universal focal colors and color categories do exist (based on underlying neurophysiology or the spectral properties of the visible-to-humans world), they may become available to consciousness at different stages of an infant’s (or a culture’s or a language’s) development, and having labels or color terms for the categories may facilitate an early maturation of the color-categorization faculty. Depending on where a culture or language is on its developmental path, the lack of proper color terms may limit its speakers’ ability to perceive colors as belonging to different categories for which they don’t have a label.
Interestingly, in the Davidoff study, a brain-damaged patient suffering from an inability to label things was also impaired in categorizing colors.
Though the exact mechanism by which labels or color terms work remains elusive, with multiple competing hypotheses (viz., that labels facilitate sortal/kind distinctions by aiding a domain-general, non-linguistic process such as memory; or that labels increase the salience of perceptual feature differences between objects), it is clear that labels are instrumental and play a definitive role in the ontological development of the child.
One may take a strong line and argue that, in the absence of color terms or labels, one would not be able to have the full cognitive experience of color categorization or sortal/kind discrimination. But even if one does not subscribe to this extreme view, it seems plausible that the different developmental levels of languages, identified by their linguistic color terms, correspond to different levels of cognitive experience that are more readily available in the corresponding cultures.
Thus, while language does affect thought and vice versa, both may be constrained by the developmental stage at which a culture is. The cognitive experience, and the cognitive developmental stage from which that experience results, would correspond to the stage of development of that culture’s language, and vice versa. Some cultures, by not using a fully evolved/developed language, may thus not be experiencing the full range of cognition and emotion that is humanly possible. Conversely, based on the linguistic devices a culture utilizes, its cognitive experiences may differ from those of another culture that utilizes a different, incompatible set of linguistic devices.
In an interesting study, it has been found that high BMI (or excess body weight) in middle-aged adults is linked to cognitive decline. Though experts have been focusing on a physical causal relationship (mediated by the effects of a lack of physical exercise on blood vessels/insulin), another plausible hypothesis is that those whose personality attributes dispose them towards laziness and a lack of physical exertion/exercise may be similarly disinclined to use their cognitive capacities to the fullest, and may exhibit mental laziness too. As the evidence for ‘use it or lose it’ in relation to cognitive capacities mounts, a ‘lazy’/‘careless’/‘challenge-avoiding’ attitude may be the underlying factor reflected in both physical decline (obesity) and cognitive decline.
A brain-fitness movement currently seems to be gaining momentum, and a new blog, SharpBrains, has expertise in precisely that niche. They are running a survey, and you can let the authors know what content you would like to see featured more on that site. Exercise your brain to the fullest, but don’t neglect the good old physical regimen, as it may have a determining effect too.
Also during this decade, in 1897, Sir Charles Sherrington described the junction between nerve and muscle and named it the ‘synapse’ (from the Greek roots syn, meaning ‘together,’ and haptein, meaning ‘to clasp’).
In Depth: If you want to learn more about attentional blink and whether the data can be explained by distracter-interference vs. two-stage bandwidth limited models, then join Chris from Developing Intelligence as he explores the phenomenon in depth.
There are some articles online by Lakoff that pertain to Conceptual Metaphor theory and are a must-read for anyone intrigued by that figure of speech called metaphor. For a layman, metaphor is when a literal reading of a sentence/phrase has to be abandoned and the utterance understood ‘figuratively.’ This definition may be more appropriate to novel metaphors/image metaphors, which rely more on conjuring up image-schemas to make sense. The ‘figure of speech’ or ‘figurative speech’ descriptions may themselves be part of the conventional metaphor “LANGUAGE IS DEPICTION” and are explained by a mapping between domains: an abstract target domain of language being mapped to the more concrete source domain of (cave-art) symbolic depictions/illustrations. While some concepts would be represented by symbols in the source domain of art representation, others would not be representational but, based on the form of the figure, would be equivalent to actual physical objects (hieroglyphics). Thus, the very definition of (novel) metaphor is grounded in Conceptual Metaphor theory.
Let us start with an example of metaphorical mapping given by Lakoff, “LOVE IS A JOURNEY,” with the metaphorical mapping deconstructed as (emphasis mine):
-The lovers correspond to travelers.
-The love relationship corresponds to the vehicle.
-The lovers’ common goals correspond to their common destinations on the journey.
-Difficulties in the relationship correspond to impediments to travel.
I would have preferred to frame “LOVE IS A JOURNEY” as “LOVE IS A VOYAGE (OF DISCOVERY),” so as to replace the burden of a well-defined destination as the goal of the journey with a relatively carefree discovery (of each other) as the destination/goal of love. Still, keeping to the “LOVE IS A JOURNEY” metaphor, it is instructive to note that the VEHICLE (of the source domain) is mapped to the relationship (of the target domain), and that the word ‘relationship’ contains ‘ship’, a popular vehicle for traversing difficult terrain like the sea. More interestingly, many similar associated words, like friendship, courtship, and companionship, also have the word ‘ship’ embedded in them.
To elaborate: while the “relationship” to “vehicle” mapping is present in the “LOVE IS A JOURNEY” metaphor, the mapping is one of superordinates, in the sense that the “VEHICLE” itself is abstract and can be a ship, a car, or a boat; also, while Lakoff doesn’t mention this, the relationship can be substituted by companionship/friendship in some other related metaphors like “FRIENDSHIP IS A JOURNEY.” What Lakoff does discuss is a sort of inheritance hierarchy whereby the structure of a base metaphorical mapping like “PURPOSIVE LIFE IS A JOURNEY” is inherited by derived metaphors like “LOVE (LIFE OF TWO) IS A JOURNEY” or “CAREER (upward, purposive) IS A JOURNEY.”
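This inheritance of mappings can be sketched in code. The following is only a toy illustration (the class and key names are mine, not Lakoff’s): a derived metaphor inherits the structural mappings of its base metaphor and then specializes or extends them.

```python
# Sketch of Lakoff's metaphor inheritance hierarchy: a derived metaphor
# like LOVE IS A JOURNEY inherits the mappings of the base metaphor
# PURPOSIVE LIFE IS A JOURNEY and then specializes/extends them.
# All names here are illustrative, not Lakoff's own notation.

class PurposiveLifeIsAJourney:
    """Base metaphor: maps purposeful life onto the travel domain."""
    mappings = {
        "person": "traveler",
        "purposes": "destinations",
        "means": "paths",
        "difficulties": "impediments to travel",
    }

class LoveIsAJourney(PurposiveLifeIsAJourney):
    """Derived metaphor: inherits the base mappings, then adds its own."""
    mappings = {
        **PurposiveLifeIsAJourney.mappings,
        "person": "travelers (the lovers)",  # specialized from the base
        "relationship": "vehicle",           # new mapping, not in the base
    }

print(LoveIsAJourney.mappings["means"])         # -> paths (inherited)
print(LoveIsAJourney.mappings["relationship"])  # -> vehicle (added)
```

The point of the sketch is that the derived metaphor need not restate the whole base structure; it only overrides or adds the mappings that distinguish it.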
To get more clarity on the ‘conceptual’ part of Conceptual Metaphor theory, consider the metaphors we normally use for concepts like time (already discussed in an earlier post), quantity, quality, and category.
The first of these semantic concepts is the “CLASSICAL CATEGORIES ARE CONTAINERS” metaphor. Here, an item (object) can be either ‘in’ a category (container) or outside of that category (container). Of course, a third possibility exists, that the item “is and is not” in that category (is on the surface of the container), but this is not discussed by Lakoff.
Another mapping, by which the “QUANTITY” of an object is the spatial direction “UP,” is based on the 3-D internal representation of Cartesian space and relies on commonsense concrete observations: a pile grows upward when more quantity is added, and the fluid in a container rises when more liquid is poured in. Thus we have statements like ‘the crime graph soared while the economy dwindled.’
The mapping by which the “QUALITY” of an object (or the linear scale measuring it) is a “PATH” again uses the underlying structure of a path, whereby movement is in the frontal (possibly radial) direction, and is based on the fact that distance in the radial direction is equivalent to more or less of a quality. Thus, statements like ‘in terms of intelligence he is way ahead of you.’ It is interesting to note that PATH metaphors rely on angular-geometry concepts, with the traveler (or subjective origin) always implicitly present in the metaphor.
The first of these dual time metaphors, exhibiting the object/landscape duality, is TIME AS THE MOTION OF AN OBJECT: future time is (something, perhaps personified) coming towards us, and past time is receding from us. This leads to expressions like: ‘The time will come when...’; ‘The time has long since gone when...’; ‘The time for action has arrived’; ‘That time is here’; ‘In the weeks following next Tuesday...’; ‘On the preceding day...’; ‘I’m looking ahead to Christmas’; ‘Thanksgiving is coming up on us’; ‘Let’s put all that behind us’; ‘I can’t face the future’; ‘Time is flying by’; ‘The time has passed when...’; etc.
It is instructive to note that the Aymara have a reverse metaphor, whereby their backs are towards the future. Logically this makes more sense, as the FUTURE is not visible to us (unless we have good predictive powers) and so should come from behind us and surprise us, while the past is there for us to see till eternity and should be in front of us. Either way, this representation still treats TIME as linear motion. A more interesting concept is that of time as circular (and thus periodic/rhythmic) motion. The metaphor here would be standing close to a merry-go-round and watching events flow past oneself. Here too, differences can arise based on whether one watches things in counter-clockwise or clockwise motion. It is interesting to note that many concepts related to time are circular (spherical/rhythmic) in nature, and that even the concept of ‘clockwise’ relies on the concept of a clock/time.
The other metaphor for time is TIME AS MOTION OVER A LANDSCAPE. This, I believe, is no different from the first one, except that it relies more heavily on “NO MOTION”: when the passage of time does not lead to any noticeable changes (CHANGE IS MOTION), one may be apt to treat the time as a location. The examples given corroborate this:
He stayed there for ten years.
He stayed there a long time.
His stay in Russia extended over many years.
He passed the time happily.
I’ll be there in a minute.
Even the last example illustrates that not much will happen in the ‘minute,’ and thus the minute is treated as a location/container.
A very important metaphorical mapping discussed is that of EVENT structure. The EVENT domain is mapped to the basic concrete domains of space, motion, and forces. Here,
States are locations (bounded regions in space).
Changes are movements (into or out of bounded regions).
Causes are forces.
Actions are self-propelled movements.
Purposes are destinations.
Means are paths (to destinations).
Difficulties are impediments to motion.
Expected progress is a travel schedule; a schedule is a virtual traveler, who reaches pre-arranged destinations at pre-arranged times.
External events are large, moving objects.
Long-term, purposeful activities are journeys.
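The list above is, in effect, a lookup table from the abstract EVENT domain to the concrete source domain of space, motion, and force. A minimal sketch (the encoding and function name are mine; only the mappings themselves are Lakoff’s):

```python
# The event-structure metaphor as a source->target lookup table.
# Keys: concepts in the abstract EVENT domain; values: their
# counterparts in the concrete domain of space, motion, and force.

EVENT_STRUCTURE = {
    "states": "locations (bounded regions in space)",
    "changes": "movements (into or out of bounded regions)",
    "causes": "forces",
    "actions": "self-propelled movements",
    "purposes": "destinations",
    "means": "paths (to destinations)",
    "difficulties": "impediments to motion",
    "expected progress": "a travel schedule",
    "external events": "large, moving objects",
    "long-term purposeful activities": "journeys",
}

def source_domain(concept: str) -> str:
    """Return the concrete (source-domain) counterpart of an event concept."""
    return EVENT_STRUCTURE[concept.lower()]

print(source_domain("Purposes"))  # -> destinations
```

Reading the metaphor as a table makes the systematicity visible: expressions like ‘we hit a roadblock’ or ‘we are ahead of schedule’ are just entries of this table applied to particular events.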
I would like to distribute these within my 8-fold path: the first five describe the event in terms of the entities involved, and the next three in terms of the context or environment in which the event happens.
States are confinements of space.
Changes are movements.
Causes attributed are underlying forces amongst the objects/force field.
Outward observable actions are equivalent to self-propelled motion with no observable external cause.
Purpose or reason for the event is mapped to there being destinations or goals.
Means used to achieve the event are mapped to there being multiple paths to the purported destination and the choosing of one path over others.
There are three factors affecting the outcome once one means (path) is chosen: difficulties, mapped to impediments to motion along the path; subjective assessment of progress, mapped to scheduled milestones along the path; and other unpredictable, outside-our-control (synchronous) events, mapped to external large moving bodies (that may curve the time-space). It is also interesting to note that large, moving objects are conceptualized in terms of things, fluids, and horses (in the last of which balance is required to control the motion).
Finally, events that are meaningful (have a purpose, the right means, etc.) and are extended in time are equated to journeys or voyages through time-space.
Lakoff also maps this event structure to the object-location duality, whereby events may be attributes possessed or happenings in a location (space-time). Thus, one can either be ‘in trouble’ or ‘have trouble.’ In the former case one is conceptualizing the event (trouble) as confinement in some space-time associated with trouble; in the latter, one is conceptualizing trouble as a possession or attribute that one has. In my view the right framing is the one that uses location metaphors, as that is more related to paths, journeys, etc., rather than object metaphors, which necessarily signify events (even those related to other persons) as objects of gratification.
While time is sometimes personified in CMT, another interesting case is that of DEATH, usually personified as a driver. This fits well with other metaphors like the BODY being a VEHICLE/CONTAINER for traversing this sea of life and transcending to the other end. Death personified serves as a driver taking one from the domain of life to the other, transcendental domain. No surprise that in Matrix Revolutions, Sati meets Neo while the death-driver of the train is coming to take Neo to the underworld (of death).
Before closing, I would like to add a few notes on poetic or novel metaphors (which will deserve their own post). I believe they involve conjuring actual images in the mind to work, and are slightly different from conventional metaphors. They may in time become entrenched and lead to conventional metaphors.
Jean Piaget had initially proposed that something akin to theory-of-mind develops in children quite late, and that they have difficulty seeing things from another person’s perspective. The two most common methods used to study this are false-belief tests and sight-of-view-from-another-person’s-perspective tests.
A recent insightful article on Cognitive Daily elaborates on the recent work that has been done on the second kind of theory-of-mind test, viz. the sight-of-view tests. Please do read the article for details and some pictures used in the actual experimental setup.
To quote the end conclusion of the article (emphasis added): “Michelon and Zacks argue that these experiments offer substantial evidence that we use at least two different methods to understand the perspective of others. When we are trying to decide whether someone else can see what we can see, these experiments suggest that we use the line-tracing method, but when we’re trying to understand the relative positions of objects, we use the more cognitively demanding perspective-taking approach.”
Now this conclusion, seen in the light of my earlier posts regarding cognitive maps and the different models for space, such as the 3-D linear (Cartesian) system or the (r, theta, phi) angular system, induces one to stretch the analogy further and speculate as follows. When one uses the Cartesian 3-D space metaphor, one need not put oneself in the place of another (as the origin in such systems is arbitrary): one can simply trace a line from the other person to the target object and use that line-tracing mechanism to answer. But when one is forced to answer about left-right distinctions, which, if angular geometry is used, require distinguishing between clockwise and anti-clockwise motion with reference to an origin (in most cases ourselves), the nature of the task literally necessitates putting oneself in the place of the other person and using angular-geometry concepts to answer. This may take more time to respond, as one has to literally rotate one’s frame of reference to align with the new origin (that of the other person).
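The extra work involved in perspective-taking can be made concrete with a toy frame-of-reference rotation. This is an illustration of the speculation above, not the cited study’s method: to decide whether a target is to another observer’s left or right, we translate that observer to the origin and rotate our coordinates by their heading.

```python
import math

def left_or_right(observer_xy, observer_heading, target_xy):
    """Is the target to the observer's left or right?

    observer_heading is the direction the observer faces, in radians
    (0 = along the +x axis, angles increasing counter-clockwise).
    We translate so the observer sits at the origin, rotate by
    -heading so the observer faces +x, and read off the sign of the
    target's local y-coordinate (+y is the observer's left).
    """
    dx = target_xy[0] - observer_xy[0]   # translate: observer -> origin
    dy = target_xy[1] - observer_xy[1]
    # rotate the target by -heading into the observer's frame
    y_local = -dx * math.sin(observer_heading) + dy * math.cos(observer_heading)
    if y_local > 0:
        return "left"
    if y_local < 0:
        return "right"
    return "straight ahead (or behind)"

# An observer at (0, 0) facing east (+x): a target at (1, 1) is to their left.
print(left_or_right((0, 0), 0.0, (1, 1)))           # -> left
# The same observer turned to face north (+y): the target is now to their right.
print(left_or_right((0, 0), math.pi / 2, (1, 1)))   # -> right
```

Note the asymmetry the article points to: a visibility judgment only needs the line from observer to target (no change of origin), whereas the left/right judgment forces the translate-and-rotate step, which is the computational analogue of the slower perspective-taking strategy.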
This is an interesting line of thought, and more evidence for the validity of the cognitive-map approach and the conclusions derived from it.
Endgame: To give this a linguistic twist (and include the determining-sets concepts), would the distinction between the right and wrong actions of a person require us to literally put ourselves in the other person’s shoes, and use angular-geometry concepts?