
How mood and felt energy are related to thought variability and speed

There is a recent article by Pronin and Jacobs on the relationship between mood, thought speed and the experience of ‘mental motion’, which builds on their previous work.

Let us see how they describe thought speed and variability and what their hypothesis is:

1. The principle of thought speed. Fast thinking, which involves many thoughts per unit time, generally produces positive affect. Slow thinking, which involves few thoughts per unit time, generally produces less positive affect. At the extremes of thought speed, racing thoughts can elicit feelings of mania, and sluggish thoughts can elicit feelings of depression.

2. The principle of thought variability. Varied thinking generally produces positive affect, whereas repetitive thinking generally produces negative affect. This principle is derived in part from the speed principle: when thoughts are repetitive, thought speed (thoughts per unit time) diminishes. At its extremes, repetitive thinking can elicit feelings of depression (or anxiety), and varied thinking can elicit feelings of mania (or reverie).

Let me clarify at the outset that they are aware of the effects of thought speed on variability and vice versa, as well as of the effects of mood on felt energy and vice versa; thus they know that one can confound the other. Another angle they consider is the relationship between thought speed/variability (i.e. the form of thought) and the contents of thought (whether emotionally salient or neutral); they investigated whether the effects of speed and variability were confounded with thought content, and found evidence against this interactionist view.

Let me also clarify that I differ slightly (based on my interpretation of their data) from their original hypothesis, in the sense that I believe their data show that speed affects felt energy and variability affects affect, that the effects of speed on mood may be mediated by the effect of speed on felt energy, and that, similarly, the effect of variability on felt energy may be mediated by its effects on mood.

Thus my claim is that:

  1. Thought speed leads to more felt energy. At the extreme, ‘racing thoughts’ lead to the manic feeling of being very energetic (when accompanied by positive mood, this may give rise to feelings of grandiosity: I have the energy to achieve anything); they may also lead to anxiety states (when accompanied by negative affect), in which one cannot really suppress a negative chain of thoughts, one following the other in fast succession, regarding the object of one’s anxiety. The counterpart to this is the state where thoughts come slowly (writer’s block etc.); when accompanied by negative affect, this can easily be viewed as depression.
  2. Thought variability leads to more positive affect. At the extreme, ‘tangential thoughts’ lead to the manic feeling of being in a good mood (when accompanied by high energy, this manifests as euphoria), while the same tangential thoughts, when accompanied by low felt energy, may actually be felt as serenity/calmness/reverie. The counterpart to this is the state of thoughts that are stuck in a rut; when accompanied by low energy, this leads to feelings of depression and sadness.

Thus, to put it simply: there are two dimensions one needs to keep track of – mood (driven by thought variability) x energy (driven by thought speed) – and the high and low extremes on these dimensions are opposites of their counterparts. A toy sketch below illustrates the four resulting quadrants.
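Since this 2 x 2 structure is the crux of my claim, here is a minimal sketch (in Python, my own illustration rather than anything from the paper; the 0.5 cut-offs are arbitrary) of how the quadrants map to felt states:

```python
def mental_state(speed: float, variability: float) -> str:
    """Map thought speed and variability (each 0..1) to a rough label.

    Speed is hypothesised to drive felt energy; variability to drive mood.
    The 0.5 thresholds are arbitrary illustrative cut-offs.
    """
    high_energy = speed >= 0.5          # fast thinking -> more felt energy
    positive_mood = variability >= 0.5  # varied thinking -> positive affect

    if high_energy and positive_mood:
        return "mania/euphoria"         # fast and varied
    if high_energy and not positive_mood:
        return "anxiety"                # fast but repetitive
    if not high_energy and positive_mood:
        return "serenity/reverie"       # slow but varied
    return "depression"                 # slow and repetitive

for speed, var in [(0.9, 0.9), (0.9, 0.1), (0.1, 0.9), (0.1, 0.1)]:
    print(f"speed={speed}, variability={var} -> {mental_state(speed, var)}")
```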

Before we move on, I’ll let the authors present their other two claims too:

3. The combination principle. Fast, varied thinking prompts elation; slow, repetitive thinking prompts dejection. When speed and variability oppose each other, such that one is low and the other high, individuals’ affective experience will depend on factors including which one of the two factors is more extreme. The psychological state elicited by such combinations can vary apart from its valence, as shown in Figure 1. For example, repetitive thinking can elicit feelings of anxiety rather than depression if that repetitive thinking is rapid. Notably, anxious states generally are more energetic than depressive states. Moreover, just as fast-moving physical objects possess more energy than do identical slower objects, fast thinking involves more energy (e.g., greater wakefulness, arousal, and feelings of energy) than does slow thinking.

4. The content independence principle. Effects of thought speed and variability are independent of the specific nature of thought content. Powerful affective states such as depression and anxiety have been traced to irrational and dysfunctional cognitions (e.g., Beck, 1976). According to the independence principle, effects of mental motion on mood do not require any particular type of thought content.

They review a number of factors and studies that all point to a causal link between thought speed and energy, and between thought variability and mood. More importantly, they show that the effects of thought speed and variability are independent of the effects of thought content on mood. I’ll not go into the details of the studies and experiments they performed, as their article is available freely online and one can read it for oneself (it makes for excellent reading); suffice it to say that I believe they are on the right track and have the evidence to back their claims.

What are the implications of this? In the authors’ words:

The speed and repetition of thoughts, we suggest, could be manipulated in order to alter and alleviate some of the mood and energy symptoms of mental disorders. The slow and repetitive aspects of depressive thinking, for example, seem to contribute to the disorder’s affective symptoms (e.g., Ianzito et al., 1974; Judd et al., 1994; Nolen-Hoeksema, 1991; Philipp et al., 1991; Segerstrom et al., 2000). Thus, techniques that are effective in speeding cognition and in breaking the cycle of repetitive thought may be useful in improving the mood and energy levels of depressed patients. The potential of this sort of treatment is suggested by Pronin and Wegner’s (2006) study, in which speeding participants’ cognitions led to improved mood and energy, even when those cognitions were negative, self-referential, and decidedly depressing. It also is suggested by Gortner et al.’s (2006) finding that an expressive writing manipulation that decreased rumination (even while inducing thoughts about an upsetting experience) rendered recurrent depression less likely.

There also is some evidence suggesting that speeding up even low-level cognition may improve mood in clinically depressed patients. In one experiment, Teasdale and Rezin (1978) instructed depressed participants to repeat aloud one of four letters of the alphabet (A, B, C, or D) presented in random order every 1, 2, or 4 s. They found that those participants required to repeat the letters at the fastest rate experienced the most reduction in depressed mood. Similar techniques could be tested for the treatment of other mental illnesses. For example, manipulations might be designed to decrease the mental motion of manic patients, perhaps by introducing repetitive and slow cognitive stimuli. Or, in the case of anxiety disorders, it would be worthwhile to test interventions aimed at inducing slow and varied thought (as opposed to the fast and repetitive thought characteristic of anxiety). The potential effectiveness of such interventions is supported by the fact that mindfulness meditation, which involves slow but varied thinking, can lessen anxiety, stress, and arousal.
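The paced letter-repetition procedure is simple enough to mock up. Below is a rough sketch (my own mock-up, not Teasdale and Rezin’s actual protocol; the real study controlled presentation and instructions far more carefully) of presenting random letters A–D at a fixed pace:

```python
import random
import time

def paced_letter_task(interval_s: float, trials: int = 10) -> None:
    """Present letters A-D in random order, one every interval_s seconds.

    The participant repeats each letter aloud as it appears; in the study,
    the fastest pace (1 s) produced the largest reduction in depressed mood.
    """
    for _ in range(trials):
        print(random.choice("ABCD"), flush=True)
        time.sleep(interval_s)

paced_letter_task(interval_s=1.0, trials=5)
```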

Hat tip: Ulterior Motives

Pronin, E., & Jacobs, E. (2008). Thought Speed, Mood, and the Experience of Mental Motion. Perspectives on Psychological Science, 3(6), 461-485. DOI: 10.1111/j.1745-6924.2008.00091.x
Pronin, E., & Wegner, D. (2006). Manic Thinking: Independent Effects of Thought Speed and Thought Content on Mood. Psychological Science, 17(9), 807-813. DOI: 10.1111/j.1467-9280.2006.01786.x

Best of Tweets: 27-05-09

Here goes:

  1. Fast, happy, and impulsive I: Speed makes you happy 
  2. Bad drives reactions, Good propels behaviors 
  3. even ‘classical’ radioactivity is random RT @Wildcat2030: Free Will And Quantum Physics: Less Related Than You Think – 
  4. co-operation as ‘another’ feature/ guiding principle of evolution RT @XiXiDu: The Key to Success?
  5. RT @mariapage: RT @news_science: Psychologists find that head movement is more important than … #LinkTweet
  6. the improv. nature of web2.0 RT @Wildcat2030: new essay “Wildcat: Jazzing the Beast” The web cultural revolution
  7. RT @BoraZ: @carlzimmer: .3quarksdaily’s new prize for science blogs. Submit url of your favorite blog post:
  8. Encephalon #71: Big Night
  9.   The Universal Language of Bird Song – Very Short List  
  10. Welcome to the Stream: The Next Phase of the Web | Twine
  11. RT @anibalmastobiza: RT: @DoctorZhivago Why We Stare, Even When We Don’t Want to:
  12. RT @Wildcat2030: “In search of the black swans” Mark Buchanan comments on marginal revolutionary ideas in science   

Major conscious and unconscious processes in the brain: part 5: Physical substrates of A-consciousness

This is the fifth post in my ongoing series on major conscious and unconscious processes in the brain. For earlier parts, click here.

Today, I would like to point to a few physical models and theories of consciousness that have been proposed, which show that consciousness still resides in the brain, although the neural/supportive processes may be more esoteric.

I should forewarn that all the theories involve an advanced understanding of brains/physics/biochemistry etc., and that I do not feel qualified enough to understand or explain all the different theories in their entirety (or even to have a surface understanding of them); yet I believe that there are important underlying patterns, and that applying the eight stage model to these approaches will only help us further understand, predict and search in the right directions. The style of this post is similar to the part 3 post on robot minds, which delineated the different physical approaches that are used to implement intelligence/brains in machines.

With that as a background, let us look at the major theoretical approaches to locating consciousness and defining its underlying substrates. I could find six different physical hypotheses about consciousness on the Wikipedia page:

  1. Orch-OR theory
  2. Electromagnetic theories of consciousness
  3. Holonomic brain theory
  4. Quantum mind
  5. Space-time theories of consciousness
  6. Simulated Reality

Now let me briefly introduce each of the theories and where they seem to have been most successful. Again I believe that, though this time visually normal people are perceiving the elephant, they are each hooked on to a different aspect of it and need to bind their perspectives together to arrive at its real nature.

1. Orch-OR theory:

The Orch OR theory combines Penrose’s hypothesis with respect to the Gödel theorem with Hameroff’s hypothesis with respect to microtubules. Together, Penrose and Hameroff have proposed that when condensates in the brain undergo an objective reduction of their wave function, that collapse connects to non-computational decision taking/experience embedded in the geometry of fundamental spacetime.
The theory further proposes that the microtubules both influence and are influenced by the conventional activity at the synapses between neurons. The Orch in Orch OR stands for orchestrated to give the full name of the theory Orchestrated Objective Reduction. Orchestration refers to the hypothetical process by which connective proteins, known as microtubule associated proteins (MAPs) influence or orchestrate the quantum processing of the microtubules.
Hameroff has proposed that condensates in microtubules in one neuron can link with other neurons via gap junctions[6]. In addition to the synaptic connections between brain cells, gap junctions are a different category of connections, where the gap between the cells is sufficiently small for quantum objects to cross it by means of a process known as quantum tunnelling. Hameroff proposes that this tunnelling allows a quantum object, such as the Bose-Einstein condensates mentioned above, to cross into other neurons, and thus extend across a large area of the brain as a single quantum object.
He further postulates that the action of this large-scale quantum feature is the source of the gamma (40 Hz) synchronisation observed in the brain, and sometimes viewed as a correlate of consciousness [7]. In support of the much more limited theory that gap junctions are related to the gamma oscillation, Hameroff quotes a number of studies from recent years.
From the point of view of consciousness theory, an essential feature of Penrose’s objective reduction is that the choice of states when objective reduction occurs is selected neither randomly, as are choices following measurement or decoherence, nor completely algorithmically. Rather, states are proposed to be selected by a ‘non-computable’ influence embedded in the fundamental level of spacetime geometry at the Planck scale.
Penrose claimed that such information is Platonic, representing pure mathematical truth, aesthetic and ethical values. More than two thousand years ago, the Greek philosopher Plato had proposed such pure values and forms, but in an abstract realm. Penrose placed the Platonic realm at the Planck scale. This relates to Penrose’s ideas concerning the three worlds: physical, mental, and the Platonic mathematical world. In his theory, the physical world can be seen as the external reality, the mental world as information processing in the brain and the Platonic world as the encryption, measurement, or geometry of fundamental spacetime that is claimed to support non-computational understanding.

To me it seems that the Orch OR theory is best suited to forming Platonic representations of objects – that is, invariant/ideal perception of an object. This I would relate to the perceptual aspect of A-consciousness.

2. Electromagnetic theories of consciousness

The electromagnetic field theory of consciousness is a theory that says the electromagnetic field generated by the brain (measurable by ECoG) is the actual carrier of conscious experience.
The starting point for these theories is the fact that every time a neuron fires to generate an action potential and a postsynaptic potential in the next neuron down the line, it also generates a disturbance to the surrounding electromagnetic (EM) field. Information coded in neuron firing patterns is therefore reflected into the brain’s EM field. Locating consciousness in the brain’s EM field, rather than the neurons, has the advantage of neatly accounting for how information located in millions of neurons scattered throughout the brain can be unified into a single conscious experience (sometimes called the binding problem): the information is unified in the EM field. In this way EM field consciousness can be considered to be ‘joined-up information’.
However their generation by synchronous firing is not the only important characteristic of conscious electromagnetic fields — in Pockett’s original theory, spatial pattern is the defining feature of a conscious (as opposed to a non-conscious) field.
In McFadden’s cemi field theory, the brain’s global EM field modifies the electric charges across neural membranes and thereby influences the probability that particular neurons will fire, providing a feed-back loop that drives free will.

To me, the EM field theories seem to be right on track regarding the fact that the EM field itself may modify/affect the probabilities of firing of individual neurons, and may thus lead to free will or a sense of agency by, in some sense, causing some neurons to fire over others. I believe we can model the agency aspect of A-consciousness, and find its neural substrates in the brain, using this approach.

3. Holonomic brain theory:

The holonomic brain theory, originated by psychologist Karl Pribram and initially developed in collaboration with physicist David Bohm, is a model for human cognition that is drastically different from conventionally accepted ideas: Pribram and Bohm posit a model of cognitive function as being guided by a matrix of neurological wave interference patterns situated temporally between holographic Gestalt perception and discrete, affective, quantum vectors derived from reward anticipation potentials.
Pribram was originally struck by the similarity of the hologram idea and Bohm’s idea of the implicate order in physics, and contacted him for collaboration. In particular, the fact that information about an image point is distributed throughout the hologram, such that each piece of the hologram contains some information about the entire image, seemed suggestive to Pribram about how the brain could encode memories.
According to Pribram, the tuning of wave frequency in cells of the primary visual cortex plays a role in visual imaging, while such tuning in the auditory system has been well established for decades. Pribram and colleagues also assert that similar tuning occurs in the somatosensory cortex.
Pribram distinguishes between propagative nerve impulses on the one hand, and slow potentials (hyperpolarizations, steep polarizations) that are essentially static. At this temporal interface, he indicates, the wave interferences form holographic patterns.

To me, the holonomic approach seems to address the phenomenon lying between gestalt perception and quantum vectors derived from reward-anticipation potentials – or, in simple English, between the perception and agency components of A-consciousness. This is the memory aspect of A-consciousness. The use of the hologram as a model for storing information, the use of slow waves that are tuned to carry information, and the use of this model to explain memory formation (including hyperpolarization etc.) all point to the fact that this approach will be most successful in explaining the autobiographical memory that is associated with A-consciousness.
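The holographic intuition – that every piece of the stored record carries information about the whole – is easy to demonstrate with a Fourier transform, which is the mathematical core of holography. A minimal sketch (my own illustration using numpy, not Pribram’s actual model):

```python
import numpy as np

signal = np.zeros(64)
signal[20:30] = 1.0          # a simple "memory": a localized bump

record = np.fft.fft(signal)  # distributed, interference-pattern-like encoding
damaged = record.copy()
damaged[32:] = 0             # destroy half of the stored record

recovered = np.fft.ifft(damaged).real
print("original  :", signal[20:30].round(2))
print("recovered :", recovered[20:30].round(2))
# The bump survives in attenuated, smeared form; losing half the record
# degrades the whole memory rather than deleting half of it outright.
```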

4. Quantum Mind:

The quantum mind hypothesis proposes that classical mechanics cannot fully explain consciousness and suggests that quantum mechanical phenomena such as quantum entanglement and superposition may play an important part in the brain’s function and could form the basis of an explanation of consciousness.
Recent papers by the physicist Gustav Bernroider have indicated that he thinks that Bohm’s implicate-explicate structure can account for the relationship between neural processes and consciousness[7]. In a paper published in 2005 Bernroider elaborated his proposals for the physical basis of this process[8]. The main thrust of his paper was the argument that quantum coherence may be sustained in ion channels for long enough to be relevant for neural processes and the channels could be entangled with surrounding lipids and proteins and with other channels in the same membrane. Ion channels regulate the electrical potential across the axon membrane and thus play a central role in the brain’s information processing.
Bernroider uses this recently revealed structure to speculate about the possibility of quantum coherence in the ion channels. Bernroider and co-author Sisir Roy’s calculations suggested to them that the behaviour of the ions in the K channel could only be understood at the quantum level. Taking this as their starting point, they then ask whether the structure of the ion channel can be related to logic states. Further calculations lead them to suggest that the K+ ions and the oxygen atoms of the binding pockets are two quantum-entangled sub-systems, which they then equate to a quantum computational mapping. The ions that are destined to be expelled from the channel are proposed to encode information about the state of the oxygen atoms. It is further proposed the separate ion channels could be quantum entangled with one another.

To me, the quantum entanglement (a bond between different phenomena) and the encoding of information about the state of the system in that entanglement seem all too similar to feelings as information about the emotional/bodily state. Thus, I propose that these quantum entanglements in ion channels may be the substrate that gives us access to the state of the system, thus giving rise to feelings – the feeling component of A-consciousness, i.e. access to one’s own emotional states.

5. Space-time theories of consciousness:

Space-time theories of consciousness have been advanced by Arthur Eddington, John Smythies and other scientists. The concept was also mentioned by Hermann Weyl who wrote that reality is a “…four-dimensional continuum which is neither ‘time’ nor ‘space’. Only the consciousness that passes on in one portion of this world experiences the detached piece which comes to meet it and passes behind it, as history, that is, as a process that is going forward in time and takes place in space”.
In 1953, CD Broad, in common with most authors in this field, proposed that there are two types of time, imaginary time measured in imaginary units (i) and real time measured on the real plane.
It can be seen that for any separation in 3D space there is a time at which the separation in 4D spacetime is zero. Similarly, if another coordinate axis is introduced called ‘real time’ that changes with imaginary time then historical events can also be no distance from a point. The combination of these result in the possibility of brain activity being at a point as well as being distributed in 3D space and time. This might allow the conscious individual to observe things, including whole movements, as if viewing them from a point.
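A worked form of that claim (my own gloss, using the standard special-relativistic interval rather than anything specific to Broad or Smythies):

```latex
% Interval s between two events with spatial separation r and time
% separation t (c = speed of light):
s^2 = r^2 - c^2 t^2
% Setting s^2 = 0 gives t = r/c: for any spatial separation r there is a
% time separation at which the 4D interval vanishes (a light-like path).
% Writing the time coordinate as an imaginary length x_4 = ict makes the
% interval look Euclidean, s^2 = r^2 + x_4^2, which is the sense in which
% "imaginary time" enters these theories.
```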
Alex Green has developed an empirical theory of phenomenal consciousness that proposes that conscious experience can be described as a five-dimensional manifold. As in Broad’s hypothesis, space-time can contain vectors of zero length between two points in space and time because of an imaginary time coordinate. A 3D volume of brain activity over a short period of time would have the time extended geometric form of a conscious observation in 5D. Green considers imaginary time to be incompatible with the modern physical description of the world, and proposes that the imaginary time coordinate is a property of the observer and unobserved things (things governed by quantum mechanics), whereas the real time of general relativity is a property of observed things.
These space-time theories of consciousness are highly speculative but have features that their proponents consider attractive: every individual would be unique because they are a space-time path rather than an instantaneous object (i.e., the theories are non-fungible), and also because consciousness is a material thing so direct supervenience would apply. The possibility that conscious experience occupies a short period of time (the specious present) would mean that it can include movements and short words; these would not seem to be possible in a presentist interpretation of experience.
Theories of this type are also suggested by cosmology. The Wheeler-De Witt equation describes the quantum wave function of the universe (or more correctly, the multiverse).

To me, the space-time theories of consciousness, which lead to observation/consciousness from a point in the 4D/5D space-time continuum, seem to mirror the identity formation function of stage 5. This I relate to the evaluation/deliberation aspect of A-consciousness.

6. Simulated Reality

In theoretical physics, digital physics holds the basic premise that the entire history of our universe is computable in some sense. The hypothesis was pioneered in Konrad Zuse’s book Rechnender Raum (translated by MIT into English as Calculating Space, 1970), which focuses on cellular automata. Juergen Schmidhuber suggested that the universe could be a Turing machine, because there is a very short program that outputs all possible programmes in an asymptotically optimal way. Other proponents include Edward Fredkin, Stephen Wolfram, and Nobel laureate Gerard ‘t Hooft. They hold that the apparently probabilistic nature of quantum physics is not incompatible with the notion of computability. A quantum version of digital physics has recently been proposed by Seth Lloyd. None of these suggestions has been developed into a workable physical theory.
It can be argued that the use of continua in physics constitutes a possible argument against the simulation of a physical universe. Removing the real numbers and uncountable infinities from physics would counter some of the objections noted above, and at least make computer simulation a possibility. However, digital physics must overcome these objections. For instance, cellular automata would appear to be a poor model for the non-locality of quantum mechanics.
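As an aside, the Schmidhuber idea quoted above – a very short program that outputs all possible programs – can at least be gestured at in a few lines. This sketch (mine, and only an enumeration of program texts; Schmidhuber’s construction also dovetails their execution) lists every binary string, shortest first:

```python
from itertools import count, product

def all_programs():
    """Yield every finite binary string, in order of increasing length."""
    for n in count(1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

gen = all_programs()
print([next(gen) for _ in range(6)])  # ['0', '1', '00', '01', '10', '11']
```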

To me, the simulation argument offers one model of us and the world – i.e. we are living in a dream state/simulation/digital world where everything is synthetic, predictable and computable. The alternative view is of the world as a real, analog, continuous world where everything is creative, unpredictable and non-computable. One can, and should, have both models in mind – a simulated reality that is the world, and a simulator that is oneself. Jagat mithya, Brahma sach: the world (simulation) is false, Brahma (creation) is true. The ability to see the world as both a fiction and a reality at the same time, as a pre-scripted stage and as creative jazz at the same time, leads to the sixth stage of consciousness – the A-consciousness of an emergent conscious self that is distinct from the mere body/brain. One can see oneself and others as actors playing their roles on the world’s stage, or as agents co-creating the reality.

That should be enough for today, but I am sure my astute readers will take this a notch further and propose two more theoretical approaches to consciousness, and perhaps look for their neural substrates, based on the remaining two stages and components of A-consciousness.

Major conscious and unconscious processes in the brain: part 4: the easy problem of A-consciousness

This is part 4 of the multi-part series on conscious and unconscious processes in the brain.

I’d like to start with a quote from the Mundaka Upanishad:

Two birds, inseparable friends, cling to the same tree. One of them eats the sweet fruit, the other looks on without eating.

On the same tree man sits grieving, immersed, bewildered, by his own impotence. But when he sees the other lord contented and knows his glory, then his grief passes away.

Today I plan to delineate the major conscious processes in the brain, without bothering with their neural correlates or how they are related to the unconscious processes that I have delineated earlier. Also, I’ll be restricting the discussion mostly to the easy problem of Access or A-consciousness, leaving the hard problem of phenomenal or P-consciousness for later.

I’d first like to quote a definition of consciousness from Baars:

The contents of consciousness include the immediate perceptual world; inner speech and visual imagery; the fleeting present and its fading traces in immediate memory; bodily feelings like pleasure, pain, and excitement; surges of feeling; autobiographical events when they are remembered; clear and immediate intentions, expectations and actions; explicit beliefs about oneself and the world; and concepts that are abstract but focal. In spite of decades of behaviouristic avoidance, few would quarrel with this list today.

Next, I would like to list the subsystems identified by Charles T. Tart that are involved in consciousness:

  • EXTEROCEPTION (sensing the external world)
  • INTEROCEPTION (sensing the body)
  • INPUT-PROCESSING (seeing meaningful stimuli)
  • MEMORY
  • SUBCONSCIOUS
  • EMOTIONS
  • EVALUATION AND DECISION-MAKING
  • SPACE/TIME SENSE
  • SENSE OF IDENTITY
  • MOTOR OUTPUT

With this background, let me delineate the major conscious processes/systems that, as per me, make up A-consciousness (a toy sketch of the recurring ‘spotlight’ pattern follows the list):

  1. Perceptual system: Once the spotlight of attention is available, it can be used to bring into focus the unconscious input representations that the brain is creating. Thus a system may evolve that has access to information regarding the sensations that are being processed – in other words, one that perceives and is conscious of what is being sensed. To perceive is to have access to one’s sensations. In Tart’s model, it is the input-processing module that ‘sees’ meaningful stimuli and ignores the rest/hides them from second-order representation. This is Baars’s immediate perceptual world.
  2. Agency system: The spotlight of attention can also bring into the foreground the unconscious urges that propel movement. This access to information regarding how and why we move gives rise to the emergence of A-consciousness of will/volition/agency. To will is to have access to one’s action-causes. In Tart’s model, it is the motor output module that enables the sense of voluntary movement. In Baars’s definition it is clear and immediate intentions, expectations and actions.
  3. Memory system: The spotlight of attention may also bring into focus past learning. This access to information regarding past unconscious learning gives rise to A-consciousness of remembering/recognizing. To remember is to have access to past learning. The Tart subsystem for the same is memory, and Baars’s definition is autobiographical events when they are remembered.
  4. Feeling (emotional/mood) system: The spotlight of attention may also highlight the emotional state of the organism. Information about one’s own emotional state gives rise to the A-consciousness of feelings that have an emotional tone/mood associated. To feel is to have access to one’s emotional state. The emotions subsystem of Tart, and Baars’s bodily feelings like pleasure, pain, and excitement, and surges of feeling, relate to this.
  5. Deliberation/reasoning/thought system: The spotlight of attention may also highlight the decisional and evaluative unconscious processes that the organism indulges in. Information about which values guided a decision can lead to a reasoning module that justifies the decisions, and to an A-consciousness of introspection. To think is to have access to one’s own deliberative and evaluative processes. Tart’s evaluation and decision-making module is for the same. Baars’s definition may be enhanced to include introspection, i.e. access to thoughts and thinking (remember Descartes’ dictum: I think, therefore I am), as part of consciousness.
  6. Modeling system that can differentiate and perceive dualism: The spotlight of attention may highlight the dual properties of the world (deterministic and chaotic). Information regarding the fact that two contradictory models of the world can both be true at the same time leads to a modeling of oneself that is different from the world, giving rise to the difference between ‘this’ and ‘that’ and to the sense of self. One models both the self and the world based on the principles/subsystems of exteroception and interoception, and this gives rise to A-consciousness of beliefs about the self and the world. To believe is to have access to one’s model of something. One has access to a self/subjectivity different from the world and defined by interoceptive senses, and to a world/reality different from the self and defined by exteroceptive senses. The interoceptive and exteroceptive subsystems of Tart, and Baars’s explicit beliefs about oneself and the world, are relevant here. This system gives rise to the concept of a subjective person or self.
  7. Language system that can report on subjective contents and propositions: The spotlight of awareness may verbalize the unconscious communicative intents and propositions, giving rise to access to inner speech and enabling overt language and reporting capabilities. To verbally report is to have access to the underlying narrative that one wants to communicate and that one is creating/confabulating. This narrative and story-telling capability should also, in my view, lead to the A-consciousness of the stream of consciousness. This would be implemented most probably by Tart’s subconscious and space/time sense modules, and relates to Baars’s fleeting present and its fading traces in immediate memory – a sense of an ongoing stream of consciousness. To have a stream of consciousness is to have access to one’s inner narrative.
  8. Awareness system that can bring into focal awareness the different conscious processes that are seen as coherent: The spotlight of attention can also be turned upon itself – information about which processes make a coherent whole, and are thus being attended and amplified, gives rise to a sense of self-identity that is stable across time and unified in space. To be aware is to have access to what one is attending to, focusing on, or is ‘conscious’ of. Tart’s sense of identity subsystem and Baars’s concepts that are abstract but focal relate to this. Once available, the spotlight of awareness opens the floodgates of phenomenal or P-consciousness – experience in the here-and-now of qualia that are invariant and experiential in nature. That ‘feeling of what it means to be’ is of course the subject matter for another day and another post!
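To make the recurring pattern in the list above concrete, here is a toy sketch (my own gloss in Python, not a model drawn from Tart or Baars): an attentional spotlight grants report-level ‘access’ to one subsystem’s otherwise unconscious content at a time. The subsystem names follow the list; the contents are invented examples.

```python
# Unconscious contents produced by the subsystems (invented examples).
SUBSYSTEMS = {
    "perception":   "red mug on the desk",
    "agency":       "urge to reach for the mug",
    "memory":       "bought this mug last year",
    "feeling":      "mild contentment",
    "deliberation": "coffee now, or after the call?",
}

def spotlight(focus: str) -> str:
    """Return the content made A-conscious (reportable) by attending to it."""
    content = SUBSYSTEMS.get(focus, "nothing in focus")
    return f"access to {focus}: {content}"

for focus in ("perception", "feeling"):
    print(spotlight(focus))
```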

Major conscious and unconscious processes in the brain: part 3: Robot minds

This article continues my series on major conscious and unconscious processes in the brain. In my last two posts I have talked about the 8 major unconscious processes in the brain, viz. the sensory, motor, learning, affective, cognitive (deliberative), modelling, communication and attentive systems. Today, I will not talk about the brain in particular, but will approach the problem from a slightly different domain – that of modelling/implementing an artificial brain/mind.

I am a computer scientist, so I am vaguely aware of the varied approaches used to model/implement the brain. Many of these use computers, though not every approach assumes that the brain is a computer.

Before continuing, I would briefly like to digress and link to one of my earlier posts regarding the different traditions of psychological research in personality and how I think they fit an evolutionary stage model. That may serve as a background to the type of sweeping analysis and generalisation that I am going to do. To be fair, it is also important to recall the Indian parable in which a few blind men, asked to describe an elephant, each described what he could lay his hands on, and thus provided a partial and incorrect picture of the elephant. One who grabbed the tail described it as snake-like, and so forth.

With that in mind, let us look at the major approaches to modelling/implementing the brain/intelligence/mind. Also remember that I have been most interested in unconscious brain processes till now, and I sincerely believe that all the unconscious processes can, and will, be successfully implemented in machines. I do not believe machines will become sentient (at least any time soon), but that question is for another day.

So, with due thanks to @wildcat2030, I came across this book today and could immediately see how the different major approaches to artificial robot brains are heavily influenced by (and follow) the first five evolutionary stages and the first five unconscious processes in the brain.
The book in question is ‘Robot Brains: Circuits and Systems for Conscious Machines’ by Pentti O. Haikonen, and although he is most interested in conscious machines, I will restrict myself to intelligent but unconscious machines/robots.

The first chapter of the book (which has made it to my reading list) is available at the Wiley site in its entirety, and I quote extensively from there:

Presently there are five main approaches to the modelling of cognition that could be used for the development of cognitive machines: the computational approach (artificial intelligence, AI), the artificial neural networks approach, the dynamical systems approach, the quantum approach and the cognitive approach. Neurobiological approaches exist, but these may be better suited for the eventual explanation of the workings of the biological brain.

The computational approach (also known as artificial intelligence, AI) towards thinking machines was initially worded by Turing (1950). A machine would be thinking if the results of the computation were indistinguishable from the results of human thinking. Later on Newell and Simon (1976) presented their Physical Symbol System Hypothesis, which maintained that general intelligent action can be achieved by a physical symbol system and that this system has all the necessary and sufficient means for this purpose. A physical symbol system was here the computer that operates with symbols (binary words) and attached rules that stipulate which symbols are to follow others. Newell and Simon believed that the computer would be able to reproduce human-like general intelligence, a feat that still remains to be seen. However, they realized that this hypothesis was only an empirical generalization and not a theorem that could be formally proven. Very little in the way of empirical proof for this hypothesis exists even today and in the 1970s the situation was not better. Therefore Newell and Simon pretended to see other kinds of proof that were in those days readily available. They proposed that the principal body of evidence for the symbol system hypothesis was negative evidence, namely the absence of specific competing hypotheses; how else could intelligent activity be accomplished by man or machine? However, the absence of evidence is by no means any evidence of absence. This kind of ‘proof by ignorance’ is too often available in large quantities, yet it is not a logically valid argument. Nevertheless, this issue has not yet been formally settled in one way or another. Today’s positive evidence is that it is possible to create world-class chess-playing programs and these can be called ‘artificial intelligence’. The negative evidence is that it appears to be next to impossible to create real general intelligence via preprogrammed commands and computations.

The original computational approach can be criticized for the lack of a cognitive foundation. Some recent approaches have tried to remedy this and consider systems that integrate the processes of perception, reaction, deliberation and reasoning (Franklin, 1995, 2003; Sloman, 2000). There is another argument against the computational view of the brain. It is known that the human brain is slow, yet it is possible to learn to play tennis and other activities that require instant responses. Computations take time. Tennis playing and the like would call for the fastest computers in existence. How could the slow brain manage this if it were to execute computations?
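For readers who have not met the Physical Symbol System Hypothesis before, the idea of symbols with "attached rules that stipulate which symbols are to follow others" can be shown in miniature. A toy sketch (mine; the symbols and rules are invented examples, not from the book):

```python
# Symbols plus rules stipulating which symbol follows a given pair.
RULES = {
    ("HUNGRY", "HAS_FOOD"): "EAT",
    ("HUNGRY", "NO_FOOD"):  "SEARCH",
    ("SATED",  "HAS_FOOD"): "STORE",
}

def step(state: str, context: str) -> str:
    """Apply the matching rule, if any, to produce the next symbol."""
    return RULES.get((state, context), "IDLE")

print(step("HUNGRY", "HAS_FOOD"))  # -> EAT
print(step("SATED", "NO_FOOD"))    # -> IDLE (no rule matches)
```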

The artificial neural networks approach, also known as connectionism, had its beginnings in the early 1940s when McCulloch and Pitts (1943) proposed that the brain cells, neurons, could be modelled by a simple electronic circuit. This circuit would receive a number of signals, multiply their intensities by the so-called synaptic weight values and sum these modified values together. The circuit would give an output signal if the sum value exceeded a given threshold. It was realized that these artificial neurons could learn and execute basic logic operations if their synaptic weight values were adjusted properly. If these artificial neurons were realized as hardware circuits then no programs would be necessary and biologically plausible artificial replicas of the brain might be possible. Also, neural networks operate in parallel, doing many things simultaneously. Thus the overall operational speed could be fast even if the individual neurons were slow. However, problems with artificial neural learning led to complicated statistical learning algorithms, ones that could best be implemented as computer programs. Many of today’s artificial neural networks are statistical pattern recognition and classification circuits. Therefore they are rather removed from their original biologically inspired idea. Cognition is not mere classification and the human brain is hardly a computer that executes complicated synaptic weight-adjusting algorithms.
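The neuron model described in that passage is small enough to write down directly. A sketch of a single McCulloch-Pitts-style unit with hand-set (not learned) weights, realising basic logic operations as the passage notes:

```python
def neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Hand-picked weights and thresholds realise AND and OR.
AND = lambda a, b: neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: neuron([a, b], [1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```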

The human brain has some 10 to the power of 11 neurons and each neuron may have tens of thousands of synaptic inputs and input weights. Many artificial neural networks learn by tweaking the synaptic weight values against each other when thousands of training examples are presented. Where in the brain would reside the computing process that would execute synaptic weight adjusting algorithms? Where would these algorithms have come from? The evolutionary feasibility of these kinds of algorithms can be seriously doubted. Complicated algorithms do not evolve via trial and error either. Moreover, humans are able to learn with a few examples only, instead of having training sessions with thousands or hundreds of thousands of examples. It is obvious that the mainstream neural networks approach is not a very plausible candidate for machine cognition although the human brain is a neural network.

Dynamical systems were proposed as a model for cognition by Ashby (1952) already in the 1950s and have been developed further by contemporary researchers (for example Thelen and Smith, 1994; Gelder, 1998, 1999; Port, 2000; Wallace, 2005). According to this approach the brain is considered as a complex system with dynamical interactions with its environment. Gelder and Port (1995) define a dynamical system as a set of quantitative variables, which change simultaneously and interdependently over quantitative time in accordance with some set of equations. Obviously the brain is indeed a large system of neuron activity variables that change over time. Accordingly the brain can be modelled as a dynamical system if the neuron activity can be quantified and if a suitable set of, say, differential equations can be formulated. The dynamical hypothesis sees the brain as comparable to analog feedback control systems with continuous parameter values. No inner representations are assumed or even accepted. However, the dynamical systems approach seems to have problems in explaining phenomena like ‘inner speech’. A would-be designer of an artificial brain would find it difficult to see what kind of system dynamics would be necessary for a specific linguistically expressed thought. The dynamical systems approach has been criticized, for instance by Eliasmith (1996, 1997), who argues that the low dimensional systems of differential equations, which must rely on collective parameters, do not model cognition easily and the dynamicists have a difficult time keeping arbitrariness from permeating their models. Eliasmith laments that there seems to be no clear ways of justifying parameter settings, choosing equations, interpreting data or creating system boundaries. Furthermore, the collective parameter models make the interpretation of the dynamic system’s behaviour difficult, as it is not easy to see or determine the meaning of any particular parameter in the model. Obviously these issues would translate into engineering problems for a designer of dynamical systems.
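The Gelder and Port definition – quantitative variables changing simultaneously and interdependently over quantitative time under a fixed set of equations – is likewise easy to make concrete. A minimal sketch (my own arbitrary example: a damped oscillator integrated with Euler steps, not any system proposed by the dynamicists):

```python
def simulate(steps: int = 5, dt: float = 0.01):
    """Two variables evolving interdependently over quantitative time."""
    x, y = 1.0, 0.0
    history = []
    for _ in range(steps):
        dx = y              # x changes according to y ...
        dy = -x - 0.1 * y   # ... and y according to both: interdependence
        x, y = x + dx * dt, y + dy * dt
        history.append((round(x, 4), round(y, 4)))
    return history

print(simulate())
```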

The quantum approach maintains that the brain is ultimately governed by quantum processes, which execute nonalgorithmic computations or act as a mediator between the brain and an assumed more-or-less immaterial ‘self’ or even ‘conscious energy field’ (for example Herbert, 1993; Hameroff, 1994; Penrose, 1989; Eccles, 1994). The quantum approach is supposed to solve problems like the apparently nonalgorithmic nature of thought, free will, the coherence of conscious experience, telepathy, telekinesis, the immortality of the soul and others. From an engineering point of view even the most practical propositions of the quantum approach are presently highly impractical in terms of actual implementation. Then there are some proposals that are hardly distinguishable from wishful fabrications of fairy tales. Here the quantum approach is not pursued.

The cognitive approach maintains that conscious machines can be built because one example already exists, namely the human brain. Therefore a cognitive machine should emulate the cognitive processes of the brain and mind, instead of merely trying to reproduce the results of the thinking processes. Accordingly the results of neurosciences and cognitive psychology should be evaluated and implemented in the design if deemed essential. However, this approach does not necessarily involve the simulation or emulation of the biological neuron as such, instead, what is to be produced is the abstracted information processing function of the neuron.

A cognitive machine would be an embodied physical entity that would interact with the environment. Cognitive robots would be obvious applications of machine cognition and there have been some early attempts towards that direction. Holland seeks to provide robots with some kind of consciousness via internal models (Holland and Goodman, 2003; Holland, 2004). Kawamura has been developing a cognitive robot with a sense of self (Kawamura, 2005; Kawamura et al., 2005). There are also others. Grand presents an experimentalist’s approach towards cognitive robots in his book (Grand, 2003).

A cognitive machine would be a complete system with processes like perception, attention, inner speech, imagination, emotions as well as pain and pleasure. Various technical approaches can be envisioned, namely indirect ones with programs, hybrid systems that combine programs and neural networks, and direct ones that are based on dedicated neural cognitive architectures. The operation of these dedicated neural cognitive architectures would combine neural, symbolic and dynamic elements.

However, the neural elements here would not be those of the traditional neural networks; no statistical learning with thousands of examples would be implied, no backpropagation or other weight-adjusting algorithms are used. Instead the networks would be associative in a way that allows the symbolic use of the neural signal arrays (vectors). The ‘symbolic’ here does not refer to the meaning-free symbol manipulation system of AI; instead it refers to the human way of using symbols with meanings. It is assumed that these cognitive machines would eventually be conscious, or at least they would reproduce most of the folk psychology hallmarks of consciousness (Haikonen, 2003a, 2005a). The engineering aspects of the direct cognitive approach are pursued in this book.

Now, to me these approaches are all unidimensional:

  1. The computational approach is suited to symbol manipulation and information representation, and might give good results when used in systems that have mostly ‘sensory’ features, like forming a mental representation of the external world, a chess game etc. Here something (stimuli from the world) is represented as something else (an internal symbolic representation).
  2. The dynamical systems approach is guided by interactions with the environment and the principles of feedback control systems, and is also prone to ‘arbitrariness’ or ‘randomness’. It is perfectly suited to implementing the ‘motor system’ of the brain, as one of their common features is apparent unpredictability (volition) despite being deterministic (chaos theory).
  3. The neural networks approach, or connectionism, is well suited to implementing the ‘learning system’ of the brain, and we can very well see that the best neural-network-based systems are those that can categorize and classify things, just like the learning system of the brain does.
  4. The quantum approach to the brain I haven’t studied enough to comment on, but the action-tendencies of the ‘affective system’ seem all too similar to the superimposed, simultaneous states that exist in a wave function before it is collapsed. Being in an affective state just means having a set of many possible related and relevant actions simultaneously activated, with one of them then somehow decided upon and actualized. I’m sure that if we could ever model emotion in machines, it would have to use quantum principles of wave functions, entanglements etc.
  5. The cognitive approach, again, I haven’t got the hang of yet, but it seems that the proposal is to build into the machine some design that is based on actual brain and mind implementations. Embodiment seems important, and so does emulating the information processing functions of neurons. I would stick my neck out and predict that, whatever this cognitive approach is, it should be best able to model the reasoning, evaluative and decision-making functions of the brain. I am reminded of the computational modelling methods used in cognitive science to functionally decompose a cognitive process (whether symbolic or subsymbolic modelling), which again aid in decision making/reasoning (see the Wikipedia entry).

Overall, I would say there is room for further improvement in the way we build more intelligent machines. They could be made such that they have two models of the world – one deterministic, another chaotic – and use the two models simultaneously (the sixth stage of modelling); then they could communicate with other machines and thus learn language (some simulation methods for language abilities do involve agents communicating with each other using arbitrary tokens, with a language developing later) (the seventh stage); and then they could be implemented such that they have a spotlight of attention (the eighth stage) whereby some coherent systems are amplified and others suppressed. Of course all this is easier said than done; we will need at least three more major approaches to modelling and implementing brains/intelligence before we can model every major unconscious process in the brain. To model consciousness and program sentience is an uphill task from there, and would definitely require a leap in our understanding/capabilities.

Do tell me if you find the above reasonable, and whether you believe that these major approaches to artificial brain implementation are guided and constrained by the major unconscious processes in the brain, and that we can learn much about the brain from the study of these artificial approaches and vice versa.
