
Multiple Cognitive Maps: how they are kept distinct

Readers of this blog will remember a study showing that there are three dissociable systems in the human hippocampal region relevant to declarative memory: the anterior hippocampus (dentate gyrus) for detecting novelty; the posterior hippocampus (CA3) for recollection (using contextual cues for recall); and the posterior hippocampal gyrus for familiarity detection. Extending these to spatial memory, one can conjecture that the dentate gyrus would be involved in detecting a novel cognitive map or spatial arrangement against the older stored cognitive maps; the CA3 region would actually store these cognitive maps, which provide the context by which mice (or men) orient themselves; while the posterior hippocampal gyrus might be involved in detecting familiarity, i.e., whether a spatial place has been visited earlier.

Research has indicated that the CA3 region indeed contains 'place cells', cells that fire when a mouse is near a particular spatial location. Multiple such cognitive maps of the environments that the mouse encounters can be stored in the hippocampus.

However, as Madam Fathom has excellently elaborated, a mystery persisted as to how broadly similar, but subtly distinct, cognitive maps are kept apart within the hippocampus. As per the above model, the dentate gyrus should have a prominent role to play here, detecting whether a new spatial location is a novel one despite its being similar in many ways to an earlier encountered location.

This is exactly what has been observed experimentally. When mice that had NMDA receptors knocked out in the dentate gyrus were put in a novel environment or context, they were unable to distinguish it from a previously learned context. Thus, these mice, though capable of learning, could not distinguish between contexts, presumably because their ability to detect a novel context was hampered.
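To make concrete what "detecting a novel context despite its similarity to a stored one" could look like computationally, here is a minimal toy sketch. It is entirely my own illustration; the vectors, the cosine measure and the threshold are arbitrary assumptions, not a model of the dentate gyrus.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two context representations."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_novel_context(current, stored_maps, threshold=0.95):
    """Treat the current context as novel if it is not sufficiently
    similar to any stored cognitive map (the threshold is arbitrary)."""
    return all(cosine_similarity(current, m) < threshold for m in stored_maps)

# Illustrative: one stored context, and a subtly different new one.
rng = np.random.default_rng(0)
stored = [rng.normal(size=50)]
subtly_different = stored[0] + 0.8 * rng.normal(size=50)

print(is_novel_context(stored[0], stored))          # False: the old context
print(is_novel_context(subtly_different, stored))   # typically True: flagged as novel
```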

To me this is further evidence for the cognitive map theory, and I would stick my neck out and say that the mechanisms and circuits involved in spatial navigation, episodic memory and declarative memory are the same and serve a similar function. Thus, the dentate gyrus not only detects novel words in a word list (declarative memory), but also detects novel spatial locations (cognitive maps) and novel autobiographical events (episodic memory). Similarly, the CA3 region of the hippocampus codes for distinct spatial maps, distinct words and facts, and distinct autobiographical memories. Likewise, the posterior hippocampal gyrus may detect familiarity for facts, for episodic memories (trouble with this may lead to deja vu-like feelings) and for spatial locations.

This multiplexed use of the same brain regions for different types of memories may also explain why mnemonic methods like the method of loci work so well: the brain regions for declarative memory are the same as those for discerning one's spatial location in an environment, so it might be computationally easy to remember lists if they are associated with spatial locations or a prominent cognitive map.

Depression, Neurogenesis and Spatial navigation

We all know that the hippocampus is the seat of both memory and spatial abilities (cognitive map theory). We also know that most of the neurogenesis in adult humans happens in the hippocampus. We also know that depression is caused by stress, and that both stress and depression lead to, or are correlated with, reduced neurogenesis in the hippocampus (my learned helplessness theory of depression).

Now a new study has found that depressed people have impaired spatial navigation abilities. Putting two and two together, it is highly plausible that this relationship between depression and impaired spatial navigation is mediated by reduced neurogenesis or atrophy in the hippocampus.

Relatedly, here is a good article (pdf) on how new anti-depressants are targeting neurogenesis in the hippocampus as a mechanism to alleviate depression.

Three cheers for the cognitive map theory, the focus with which this blog started!!

Hat Tip: BPS Research Digest

The courage of a mouse to say ‘No’: A case of metacognition or risk-aversion?

A recent article in Current Biology by Foote et al (courtesy Ars Technica) posits that rats have metacognitive abilities. Till now, only humans and primates were assumed to have metacognition. One defining characteristic of metacognition is knowing what you know and, equally, knowing what you don't know. It means one can think about one's own mental states and determine what knowledge one already has and what knowledge one has not yet learned. A related ability would be declining a test of knowledge if one thinks one has not learned enough to ace it. Anyone who took the GRE or another exam recently, and perhaps postponed it, will have no difficulty appreciating that postponing or declining a test involves metacognition.

Taking this line of reasoning further, Foote et al surmise that if a rat could decline a test under conditions in which it was unsure of its learned knowledge and doubted its ability to complete the test successfully, then such declining behavior would indicate that the rat has metacognitive abilities. I find no flaws in this reasoning, but I have a few quibbles about their particular experimental setup, which may have confounded the results by not factoring in risk aversion.

First, regarding the hypothesis of their experiment:

Here, we demonstrate for the first time that rats are capable of metacognition—i.e., they know when they do not know the answer in a duration-discrimination test. Before taking the duration test, rats were given the opportunity to decline the test. On other trials, they were not given the option to decline the test. Accurate performance on the duration test yielded a large reward, whereas inaccurate performance resulted in no reward. Declining a test yielded a small but guaranteed reward. If rats possess knowledge regarding whether they know the answer to the test, they would be expected to decline most frequently on difficult tests and show lowest accuracy on difficult tests that cannot be declined [4]. Our data provide evidence for both predictions and suggest that a nonprimate has knowledge of its own cognitive state.

Now on to the actual experimental setup:

Each trial consisted of three phases: study, choice, and test phases (Figure 1). In the study phase, a brief noise was presented for the subject to classify as short (2–3.62 s) or long (4.42–8 s). Stimuli with intermediate durations (e.g., 3.62 and 4.42 s) are most difficult to classify as short or long [11, 12]. By contrast, more widely spaced intervals (e.g., 2 and 8 s) are easiest to classify. In the choice phase, the rat was sometimes presented with two response options, signaled by the illumination of two nose-poke apertures. On these choice-test trials, a response in one of these apertures (referred to as a take-the-test response) led to the insertion of two response levers in the subsequent test phase; one lever was designated as the correct response after a short noise, and the other lever was designated as the correct response after a long noise. The other aperture (referred to as the decline-the-test response) led to the omission of the duration test. On other trials in the choice phase, the rat was presented with only one response option; on these forced-test trials, the rat was required to select the aperture that led to the duration test (i.e., the option to decline the test was not available), and this was followed by the duration test. In the test phase, a correct lever press with respect to the duration discrimination produced a large reward of six pellets; an incorrect lever press produced no reward. A decline response (provided that this option was, indeed, available) led to a guaranteed but smaller reward of three pellets.

The test they used is a stimulus discrimination test. Their results indicated that the rats indeed declined more often on difficult trials (trials in which the stimuli were closely spaced around the mean of about 4 s) than on easy trials (in which they had to discriminate widely spaced stimuli, say 2 s and 8 s). This neatly demonstrates that the rats were internally calculating their odds of passing the test, and in the case of the difficult test they took the better option of declining it. However, I would like to see more of their data and factor out the effects of risk aversion.

We all know that humans are prone to risk aversion. That is, if I present you with the option of a sure amount of 100 rs or a 50% chance of winning 200 rs, you would normally choose the first option, even though the expected value is the same: in the first case you have an expected value of 100, and in the second case you also have an expected value of 100 (0.5*0 + 0.5*200). Thus, on expected value alone, there is no reason to prefer one over the other. This becomes more interesting when we increase the amount of the risky option: suppose we now offer 100 rs assured vis-a-vis a 50% chance of 300 rs; still, most of us end up choosing the assured sum.
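A minimal sketch of this arithmetic (the concave square-root utility below is my own illustrative assumption, a standard way of modelling risk aversion, not something from the post or the paper): under a risk-neutral expected-value comparison the two options tie, but a risk-averse utility prefers the sure amount.

```python
import math

def expected_value(outcomes):
    """Expected value of a gamble given (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

def expected_utility(outcomes, utility=math.sqrt):
    """Expected utility under a concave (risk-averse) utility function."""
    return sum(p * utility(x) for p, x in outcomes)

sure_100 = [(1.0, 100)]
gamble_200 = [(0.5, 0), (0.5, 200)]

print(expected_value(sure_100), expected_value(gamble_200))      # 100.0 100.0 -> a tie
print(expected_utility(sure_100), expected_utility(gamble_200))  # 10.0 vs ~7.07 -> sure option wins
```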

In this setup the utility of declining the test is 3 pellets, while, if we assume that the rats have not learned how to discriminate the stimuli and press the levers at random (so that each outcome of the test is equally probable), the expected value of taking the test is 0.5*0 + 0.5*6 = 3 pellets. So we have the same situation as with humans. Taking risk aversion into account, one would then expect the rats to decline the test more often in the difficult stimulus conditions, as that is the safe and assured option compared with the take-the-test condition. As a matter of fact, I am surprised that some rats still chose the take-the-test condition. I guess men are more meek than mice!!

So the best thing to do would be to take risk aversion into account and, after factoring it out, decide whether the rats knew (in a conscious sense) that the test was difficult. Risk aversion is mostly sub-conscious and would not involve metacognition. However, the trend of declining more often as test difficulty rises does suggest that the rats have some metacognition.
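To illustrate the confound that needs factoring out, here is a small sketch (the numbers, the use of accuracy as a stand-in for trial difficulty, and the square-root utility are my own assumptions, not the authors'): an agent that simply maximizes a risk-averse expected utility declines over a wider range of low-accuracy (i.e., difficult) trials than a risk-neutral one, producing more declining on harder trials without any explicit "knowing that it does not know".

```python
import math

def declines(p_correct, utility):
    """Decline if the sure 3-pellet reward beats the expected utility
    of taking the test (6 pellets if correct, 0 otherwise)."""
    return utility(3) > p_correct * utility(6) + (1 - p_correct) * utility(0)

risk_neutral = lambda x: x   # plain expected value
risk_averse = math.sqrt      # concave utility (assumed)

# p_correct stands in for trial difficulty: harder trials -> lower accuracy.
for p in [0.9, 0.75, 0.6, 0.55]:
    print(p, declines(p, risk_neutral), declines(p, risk_averse))
# The risk-averse agent starts declining once accuracy drops below ~0.71,
# the risk-neutral one only below 0.5, so risk aversion alone produces
# more declining on harder (lower-accuracy) trials.
```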

I would love to see this study replicated using a maze (mouse-trap sort of) task. In a maze, the cognitive map of the maze provides a good indicator of how much the mice know about the test and its difficulty, and measuring decline behavior in that setting may be more directly related to their metacognitive abilities.

Encephalon #10: A treat for your mind!

The latest edition of encephalon, the brain carnival, has just been published by Bora at A Blog Around The Clock.

It is a truly outstanding issue highlighting some of the best cognitive posts on the web.

My favorite pick is Gene Expression's excellent summary of the current view of the hippocampal formation as a memory consolidator, and also as representing spatio-temporal information in the form of cognitive maps.

Readers of this blog will remember that it started with a cognitive map focus, and it is heartening to see how the place and grid cell systems discovered in the hippocampus may contribute to the different dissociated memory areas hypothesized in the hippocampus regarding novelty and similarity (recollection) memory retrieval. Incidentally, the novelty-related area, found using fMRI, was the rhinal cortices, and the grid cells are found there too! I will write a detailed post linking everything up, but for now you may want to savor the other great articles in the Encephalon, another favorite being the exploration of peripersonal space in neglect patients by Michael.

Causal learning: how different is it from normal learning?

I was browsing a write-up on causal reasoning by Mixing Memory, and came across this article by Lagnado et al regarding the causal structure underlying causal reasoning.

In brief, causal reasoning refers to the ability of humans to classify some events as causes and others as effects, and to determine, either deterministically or probabilistically, which effects are caused by which causes. In simple words, it is the ability to assign causes to effects.

Historically, research on causal reasoning has focused on statistical measures of covariance or correlation between two events, using the strength of the correlation to calculate and predict the causal relation between them. This suffers from several drawbacks, such as the inability to determine the direction of causation, or the inability to rule out a third common cause of which the two observed events are both effects.

Lagnado et al, in their paper, present a refreshing new perspective on causal reasoning by differentiating between the qualitative Causal Structure relating two or more events and the quantitative Causal Strength of that relationship. For example, a causal structure may causally relate the presence of fever with bacterial infection, thus identifying bacterial infection as a cause of fever; but the causal strength between bacterial infection and fever determines the probability we assign to a particular case of fever having been caused by bacterial infection (diagnostic learning), or the probability that, given a bacterial infection, a person will develop fever (predictive learning).
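As a worked illustration of the strength side of this distinction (the numbers below are made up purely for illustration; they come neither from the post nor from the paper), once the structure "infection -> fever" is assumed, predictive strength is just P(fever | infection), while diagnostic strength P(infection | fever) follows from Bayes' rule:

```python
# Assumed illustrative numbers, not from the paper.
p_infection = 0.10                 # prior probability of bacterial infection
p_fever_given_infection = 0.80     # predictive strength: P(fever | infection)
p_fever_given_no_infection = 0.05  # fever from other causes

# Total probability of fever under the assumed structure infection -> fever.
p_fever = (p_fever_given_infection * p_infection
           + p_fever_given_no_infection * (1 - p_infection))

# Diagnostic strength via Bayes' rule: P(infection | fever).
p_infection_given_fever = p_fever_given_infection * p_infection / p_fever

print(f"P(fever) = {p_fever:.3f}")                              # 0.125
print(f"P(infection | fever) = {p_infection_given_fever:.3f}")  # 0.640
```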

The authors contend that the issues involved in causal strength learning and causal structure learning are different and should be addressed differently. Further, they contend that most of the historical research has been limited to causal strength learning, ignoring the prior and more fundamental stage of causal structure learning; in their theory, the causal strength of any relation can only be learned once one has some a priori qualitative assumptions about the underlying causal relationships. Their paper thus focuses on what cues and mechanisms are involved in the formation of the causal structure.

Causal-model theory was a relatively early, qualitative attempt to capture the distinction between structure and strength. According to this proposal causal induction is guided by top-down assumptions about the structure of causal models. These hypothetical causal models guide the processing of the learning input. The basic idea behind this approach is that we rarely encounter a causal learning situation in which we do not have some intuitions about basic causal features, such as whether an event is a potential cause or effect. If, for example, the task is to press a button and observe a light, we may not know whether these events are causally related or not, but we assume that the button is a potential cause and the light is a potential effect. Once a hypothetical causal model is in place, we can start estimating causal strength by observing covariation information. The way covariation estimates are computed and interpreted is dependent on the assumed causal model.

They list the cues that humans use to form their causal structures as:

  • Statistical relations
  • Temporal order
  • Intervention
  • Prior knowledge

Before discussing each of these cues in depth, and how they may affect causal reasoning, it is instructive to note that the concept of a Causal Structure underlying a given set of phenomena is quite close to the idea of a Cognitive Map underlying a given environment (say a maze or the mouse trap). While the latter is a spatial mental map of the objects in the surrounding 3-D space, the former may be conceived as a causal mental map of events in the temporal dimension. The reason I am using this analogy is to contrast the cues used in formulating a causal structure with the different learning mechanisms used by mice to form a cognitive map of the mouse trap. The contention is that the same cognitive mechanisms are involved, and also that these mechanisms are structured and unfold in a developmentally guided and staged manner.

The first cue for forming a causal structure, or linking two or more events, is statistical relations. Here, correlation information between the events, as well as their conditional independences, is used to arrive at a set of Markov-equivalent causal models. Much of this learning is associative, probabilistic and maybe latent. It may not be accessible to consciousness, and the learning of causal structure is more implicit than explicit. For example, the regularities in the data may give rise to a fuzzy causal structure in which tentative causal relations are posited. Suppose the data show that A and B are perfectly correlated. The person will have a strong sense of causation between A and B, but would be unable to determine the direction of causation. Similarly, if three events A, B and C are correlated, we would not be able to determine the directions of causation. This mechanism is very much like the latent learning mechanism exhibited by mice in the mouse trap.
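A minimal simulation of why observational data alone leaves the direction undetermined (the linear-Gaussian models and the coefficient below are my own illustrative assumptions): a model in which A causes B and a mirror model in which B causes A produce statistically indistinguishable correlations, i.e., they are Markov equivalent.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
beta = 0.8  # assumed causal coefficient

# Model 1: A -> B
a1 = rng.normal(size=n)
b1 = beta * a1 + np.sqrt(1 - beta**2) * rng.normal(size=n)

# Model 2: B -> A (the mirror image)
b2 = rng.normal(size=n)
a2 = beta * b2 + np.sqrt(1 - beta**2) * rng.normal(size=n)

# Both models yield (approximately) the same correlation, so correlation
# alone cannot tell us which variable is the cause.
print(np.corrcoef(a1, b1)[0, 1])  # ~0.8
print(np.corrcoef(a2, b2)[0, 1])  # ~0.8
```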

The second cue for forming a causal structure that we consider here is intervention. Here, the human intervenes by acting on one of the events (the potential cause) and, on the basis of that intervention or exercised choice, experiments to find out what effect that variable has on the outcome (the effect). To define interventions more rigorously, let me quote from the paper.

Informally, an intervention involves imposing a change on a variable in a causal system from outside the system. A strong intervention is one that sets the variable in question to a particular value, and thus overrides the effects of any other causes of that variable. It does this without directly changing anything else in the system, although of course other variables in the system can change indirectly as a result of changes to the intervened-on variable. What is important for the purposes of causal learning is that an intervention can act as a quasi-experiment, one that eliminates (or reduces) confounds and helps establish the existence of a causal relation between the intervened-on variable and its effects.

Suppose A and B have been found to be correlated. Further suppose that the occurrence of events A and B is under the control of the human subject. Then one can intervene to cause A and observe whether B occurs. If so, the direction of causation is from A -> B. On the other hand, if by intervening the subject caused B to happen and did not observe A, then one could conclude that B does not cause A. To make the example concrete, consider event A as 'fire' and event B as 'smoke'. We find that fire and smoke are correlated. By intervening and conducting experiments in which we control the occurrence of 'fire' or 'smoke', we can arrive at the correct causal relation: 'fire' -> 'smoke'.
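Here is a small sketch of that quasi-experiment (the simulated toy world and its probabilities are illustrative assumptions, not anything from the paper): forcing 'fire' on or off changes how often 'smoke' appears, while forcing 'smoke' leaves 'fire' untouched, so the interventions reveal the direction fire -> smoke.

```python
import random

random.seed(1)

def world(do_fire=None, do_smoke=None):
    """One run of a toy world where fire causes smoke (assumed structure).
    do_fire / do_smoke override the variable from outside the system."""
    fire = (random.random() < 0.3) if do_fire is None else do_fire
    smoke = (random.random() < 0.9) if fire else (random.random() < 0.05)
    if do_smoke is not None:
        smoke = do_smoke  # intervening on smoke does not touch fire
    return fire, smoke

def rate(samples, key):
    """Fraction of samples for which key(sample) is true."""
    return sum(key(s) for s in samples) / len(samples)

n = 10_000
# Intervene on fire: the smoke rate changes a lot -> fire is a cause of smoke.
print(rate([world(do_fire=True) for _ in range(n)], lambda s: s[1]))   # ~0.9
print(rate([world(do_fire=False) for _ in range(n)], lambda s: s[1]))  # ~0.05
# Intervene on smoke: the fire rate stays the same -> smoke does not cause fire.
print(rate([world(do_smoke=True) for _ in range(n)], lambda s: s[0]))  # ~0.3
print(rate([world(do_smoke=False) for _ in range(n)], lambda s: s[0])) # ~0.3
```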

Consider again a three-event situation in which the relation between two potential causes (A and B) and an outcome (C) has to be ascertained. Specifically, by intervening to cause A on some occasions and B on others, and observing whether C happens, we can ascertain the causal structure, i.e., whether A -> C or B -> C. The situation is not too different from the vicarious trial-and-error learning exhibited by a mouse at a choice point. There, the mouse has to learn, by trial-and-error choice of right/left or black/white turnings, which stimulus is associated with food (the outcome). Thus, the intervention mechanism is nothing but refined vicarious trial-and-error learning.

The third, and perhaps the most important, mechanism used to form the causal structure is temporal ordering. This is a very simple mechanism whereby events occurring prior to some other event can be causes of that event, but not vice versa.

The temporal order in which events occur provides a fundamental cue to causal structure. Causes occur before (or possibly simultaneously with) their effects, so if one knows that event A occurs after event B, one can be sure that A is not a cause of B. However, while the temporal order of events can be used to rule out potential causes, it does not provide a sufficient cue to rule them in. Just because events of type B reliably follow events of type A, it does not follow that A causes B. Their regular succession may be explained by a common cause C (e.g., heavy drinking first causes euphoria and only later causes sickness). Thus the temporal order of events is an imperfect cue to causal structure.

This mechanism is the same as the one used by mice when searching for a stimulus. When two events follow each other, an active search mechanism is used to identify the salient stimulus that may have been the cause of the event. The concept of temporal ordering implying causation is inherent in this learning mechanism, as are the concepts of spatial and temporal contiguity and proximity. This is the normal avoidance learning mechanism in mice, and in human causal structure learning it may be more engaged in, and relevant to, identifying the causes of undesirable events.
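A short sketch of the limitation the quoted passage points out (the heavy-drinking example comes from the paper; the simulation details are my own assumptions): euphoria reliably precedes sickness, yet both are effects of a common cause, so temporal order can rule causes out but cannot rule them in.

```python
import random

random.seed(2)

def evening(do_euphoria=None):
    """Toy common-cause structure (assumed for illustration): heavy drinking
    causes euphoria early in the evening and sickness later; euphoria itself
    has no effect on sickness."""
    drinking = random.random() < 0.5
    euphoria = (drinking and random.random() < 0.9) if do_euphoria is None else do_euphoria
    sickness = drinking and random.random() < 0.8
    return euphoria, sickness

# Observation: sickness reliably follows euphoria...
nights = [evening() for _ in range(20_000)]
p_sick_after_euphoria = (sum(1 for e, s in nights if e and s)
                         / sum(1 for e, _ in nights if e))
print(f"P(sickness | euphoria observed) = {p_sick_after_euphoria:.2f}")  # ~0.8

# ...but forcing euphoria (without the drinking) does nothing to sickness,
# exposing the common cause behind the regular succession.
forced = [evening(do_euphoria=True) for _ in range(20_000)]
p_sick_after_forced = sum(1 for _, s in forced if s) / len(forced)
print(f"P(sickness | euphoria forced)   = {p_sick_after_forced:.2f}")    # ~0.4
```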

The fourth cue for identifying causal structure, which the authors do not touch on directly (though they hint at it by highlighting the importance of causal mechanisms) but which I propose nonetheless, is the construction and elaboration of causal chains. This basically involves breaking up the simple A -> B relation by positing intermediate and competing events C, D, E, etc., and then intervening and conducting experiments to arrive at the correct causal chain. Thus, A -> B may be refined as A -> C -> D -> B or A -> E -> B, and experimentation done to narrow down to a particular causal chain.

This is similar to the hypothesis learning involved in mice and depends on a cognitive capacity to sequence events. It is also normally exhibited in approach behavior, and this elaboration of the causal chain may be more relevant to desirable outcomes that human subjects want to happen and to all the small intermediate steps they need to cause to make the final outcome happen.
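To make the chain-refinement idea above concrete, here is a hedged sketch (the candidate chains and the toy "blocking" experiment are my own illustrative assumptions, not the authors' proposal): given two candidate chains A -> C -> B and A -> E -> B, intervening to block each intermediate node and checking whether B still follows A narrows the hypothesis space down to one chain.

```python
# The true mechanism (hidden from the learner) is A -> C -> B.
def run_world(a, blocked=None):
    """Return whether B occurs, given A and an optionally blocked node."""
    blocked = blocked or set()
    c = a and "C" not in blocked
    e = False                      # E plays no causal role in the true world
    b = c or e
    return b

candidate_chains = {"A -> C -> B": "C", "A -> E -> B": "E"}

# Experiment: trigger A while blocking each proposed intermediate node.
for chain, node in candidate_chains.items():
    b_when_blocked = run_world(a=True, blocked={node})
    # If blocking the node stops B, that node really mediates A's effect.
    verdict = "supported" if not b_when_blocked else "ruled out"
    print(f"{chain}: blocking {node} -> B = {b_when_blocked} ({verdict})")
```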

The fifth, and for now final, cue that is used in the formation of causal structure is prior knowledge. The authors define it as follows:

Regardless of when we observe fever in a patient, our world knowledge tells us that fever is not a cause but rather an effect of an underlying disease. Prior knowledge may be very specific when we have already learned about a causal relation, but prior knowledge can also be abstract and hypothetical. We know that switches can turn on devices even when we do not know about the specific function of a switch in a novel device. Similarly we know that diseases can cause a wide range of symptoms prior to finding out which symptom is caused by which disease. In contrast, rarely do we consider symptoms as possible causes of a disease.

My take on prior knowledge is close to that, but slightly different. The subject forms a general idea of which events are causes and which are effects, and also of the general relationship between a primary cause and a desired/undesired final outcome. Though the intervening small steps of the causal chain may not be present, and thus no formal corroborating data-based proof may exist, one can still deduce the causal relationship between the primary cause and the later final outcome, ignoring the intermediate minor events along the way. A case in point is food aversion learning, whereby a single vomit following consumption of, say, a spoiled food eaten hours earlier may result in a strong automatic association, the learning of that food as the cause of the vomiting, and the subsequent avoidance of (or escape from) that food.

To me this mechanism is the same as the one exhibited by mice when they learn the spatial orientation of the mouse trap and are able to exhibit novel escape learning.

This summarizes the analogy between causal learning and normal learning for now. I will touch on the next three, qualitatively different, (causal) learning mechanisms later.
