Kahneman, in his book ‘Thinking, Fast and Slow’, elucidates the two types of thinking processes involved: System 1, consisting of fast, intuitive processing, and System 2, consisting of slower, more deliberate processing. Lesser known is the fact that a similar dual-process theory of personality, one that precedes his work, is due to Seymour Epstein.
Epstein is known for his Cognitive-Experiential Self-Theory of personality (CEST), through which he reintroduced the concept of the unconscious in psychology, in the form of the Experiential system; his unconscious, however, was not maladaptive and instinct-driven, but more adaptive in nature.
Essentially, Epstein acknowledges the massive role the Experiential system has on the Rational system, postulating that most behavior is experientially driven and only post hoc rationalized by the Rational system.
The Experiential system, though unconscious, is not made up of repressed desires, nor does it work on the pleasure principle; instead it is geared towards satisfying four basic needs. He later added two super-ordinate needs: one related to valence, or the positive affect–negative affect polarity, and the other related to arousal. It’s pertinent to note that the Experiential system of CEST is very much affect-driven and ‘hot’ rather than ‘cold’ in nature.
Essentially, Epstein himself tacitly split the four needs into eight by claiming that each need can be split along the super-ordinate positive affect–negative affect polarity. Here are the four basic needs made explicit.
In classical Freudian theory, the single most basic need, before his introduction of a death instinct, was the pleasure principle, which refers to the pursuit of pleasure and the avoidance of pain (Freud, 1924/1960). Some learning theorists, such as Thorndike (1927), make a similar assumption in their view of the importance of affective reinforcement. For object-relations theorists, most notably Bowlby (1988), the most fundamental need is the need for relatedness. For Rogers (1951) and other phenomenological psychologists, it is the need to maintain the stability and coherence of a person’s conceptual system. For Allport (1961) and Kohut (1971), it is the need to enhance self-esteem. (For a more thorough discussion of these proposals see Epstein, 1993, 1998b.) From the perspective of CEST, the four proposed basic needs all meet the following criteria for a basic need: the need is universal, the need can dominate the other basic needs, and a failure to fulfill the need can destabilize the overall conceptual system.
These four basic needs may be satisfied to various degrees during critical developmental periods and lead to four basic types of beliefs. A scale has even been created to measure these basic beliefs:
The Basic Beliefs Inventory (BBI). The BBI (Catlin & Epstein, 1992) is a 102-item measure of beliefs associated with the satisfaction of four basic needs that motivate behavior according to CEST (see Epstein, 1991). The four basic beliefs are (1) the belief that the world is benign versus malevolent; (2) the belief that the world is meaningful (i.e., predictable, controllable, and just) versus chaotic (i.e., unpredictable, uncontrollable, and unjust); (3) the belief that relations with others are supportive versus threatening; and (4) the belief that the self is worthy (i.e., competent, good, and lovable) versus unworthy (i.e., incompetent, bad, and unlovable).
To me this aligns very well with the fundamental four model. To recap, as per the fundamental four model there are four polarities of basic motivations or drives: pleasure/pain, active/passive, self/other and broad/narrow.
I would like to take this opportunity to expand the CEST and merge it with the fundamental four framework.
As per CEST, we all have beliefs, schemas or models about the self, others and the inanimate world, and these are significantly involved in psychopathology.
I would propose that we have four basic models with two sub-models each. The four basic models are related to Life (where self and others/environment are not typically distinguished from each other), a Self model, an Other model and a World model.
Life-past-and-present: How do we view the life that has already happened? If the experiences were mostly good, we see life as beautiful or benign; if the experiences were mostly bad, we view life as sucking or malevolent.
Life-yet-to-come: What do we expect the future to be like? If we expect life to be full of adventure and hope, we feel life is promising; if we expect life to be mostly downhill, we feel that life is bleak.
Self’s-impact-on-Env: How much control do we feel we have over our environment? Are we in control, can we choose our niches, and are our efforts rewarded and effective? If yes, we have feelings of positive self-esteem; otherwise we feel incompetent.
Env-impact-on-self: Does our environment allow us any autonomy in regulating our behavior? Does it act for our benefit or to our advantage? If the environment provides unconditional positive regard, we develop positive self-worth and feel competent dealing with life’s curve-balls; else we end up feeling worthless.
Others-same-as-me: Am I part of the in-group? If we are accepted as part of the in-group, our needs of belonging are satisfied; else we feel lonely.
Others-different-than-me: Can I trust them? Will they trust me? After all, they are an out-group. If we are able to rise above our fears and distrust, our needs for connection are satisfied; else we remain isolated.
Physical-World: Is the physical universe lawful? Is it determined and non-miraculous? If our percepts lead us to believe that we live in a lawful universe, we have a stable overarching schema; whenever we witness something not in line with the laws of nature, that schema goes for a toss.
Social-World: Is the social world predictable? Do the actions of people make sense, or is there too much randomness? If the social world seems predictable and lawful in its own sense, then we can maintain a coherent worldview; else, if we encounter too many behaviors or events of which we cannot make sense, we risk becoming incoherent.
It is my contention that dysfunctional beliefs at each of these eight sub-models lead to different types of psychopathology. For example, a Life model that says that life is malevolent/bleak may lead to anxiety; a Self model claiming that the self is worthless/incompetent may lead to depression; while a World model where events/percepts don’t make sense and which is incoherent/unstable may lead to psychosis.
And of course this may be mediated by early life experiences/ genetic propensities that give rise to differences in brain neurotransmitter systems. But a detailed model about that should be the subject of a new post.
“Between stimulus and response there is a space. In that space is our power to choose our response. In our response lies our growth and our freedom.” Viktor E. Frankl
As per the spiritual traditions of India, the Mind (or Antahkaran) is made up of four functions or parts. These are Manas, Chitta, Ahamkara and Buddhi, typically translated as sensory-motor mind, memory bank, ego and intellect respectively. As an interesting aside, Buddha derives from the same root as Buddhi (budh, to know) and stands for the enlightened one.
Here is a brief description of the four functions:
Manas is ordinary, indeterminate thinking — just being aware that something is there, automatically registering the facts which the senses perceive.
The subconscious action, memory, etc., is caused by chitta. The function of chitta is chinta (contemplation), the faculty whereby the Mind in its widest sense raises for itself the subject of its thought and dwells thereon.
Buddhi determines, decides and logically comes to a conclusion that something is such-and-such a thing. That is another aspect of the operation of the psyche — buddhi, or intellect. Buddhi, on attending to such registration, discriminates, determines, and cognizes the object registered, which is set over and against the subjective self by ahamkara.
Ahamkara — ego, affirmation, assertion, ‘I know’. “I know that there is some object in front of me, and I also know that I know. I know that I am existing as this so-and-so.” This kind of affirmation attributed to one’s own individuality is the work of ahamkara, known as egoism.
There is also a hierarchical relation between these, with Buddhi at the top and Manas at the bottom. Now, let’s look at each of these more closely.
Manas, or the sensory-motor mind, is not just registering stimuli but is also responsible for executing actions, and may be equated with the sensory/motor cortical functions of the brain. It controls the 10 Indriyas (5 senses and 5 action-oriented faculties). It’s important to note that Manas performs the functions associated with both stimulus and response, though it is the first in line when it comes to stimulus processing (registering the stimulus) and the last when it comes to executing responses/actions (it blindly executes the action that has been decided/chosen upstream). Of course one could just have a reflex action where a stimulus leads directly to a response, but in the majority of human actions there is a space between the two. That space is provided by the rest of the mind’s functions.
Chitta, or the memory-prospecting mind, may typically be equated with the association cortex part of the brain. Many refer to Chitta as the memory or impressions bank, but forget to mention the future-oriented part of it. Here is a quote:
The part of the Mind thinking and visualizing the objects, events and experiences from the past or the future (emphasis mine) is called the Chitta and this act is called Chintan.
It is thus evident that Chitta drives Manas not only based on past memories, but also based on future expectations or predictions. From brain studies, we know that the same part of the brain is used for memory as well as prospection. Chitta using past memories to drive Manas (and thus behavior or motivated cognition) I view as being conditioned by classical conditioning processes; Chitta using future expectations/predictions to drive behavior and motivated cognition I view as being conditioned by operant conditioning processes. In many philosophical and spiritual traditions, one of the aims is to get over (social) conditioning. Chitta hinders spiritual awakening by using habits, which are an integral part of Chitta’s function. The habits are nothing but the conditioning, again one in the stimulus path and the other in the response/action path.
Ahamkara, or the experiential-agentic self, may typically be equated with consciousness and the conscious, ego-driven self. It knows and says ‘I am’. Conscious entities typically have two functions: experience and agency. There is something it is like to be that conscious entity (experience), and the entity has volition, or the ability to do things (agency). The concept of the self as a conscious entity that has experience (in the stimulus path) and agency (in the response/action path) is important for this notion of Ahamkara. With the self come concepts like the real self and the ideal self, which drive and are driven by experience and agency respectively. The less the discrepancy between the two, the better your spiritual growth. An interesting concept here is that of coloring, or external decorations: your coloring, or how you see your self, does have a downward impact on Chitta and Manas by contaminating the stimulus/action.
Buddhi, or the knowing-deciding mind, is the final frontier on the path to spirituality. The typical functions associated with Buddhi are knowing, discriminating, judging and deciding. I think knowing/discriminating (between stimuli, actions, etc.) is a stimulus-path function, while judging/deciding (between actions/responses, or attending to a stimulus) is a response-path function. However, I also believe they converge to a great extent here, or else we would have a problem of turtles all the way down. Once you start to see things as they are, you are also able to choose wisely. At least that is what the scriptures say and what Bodhisattvas aspire to or achieve.
To me this increasingly fine-grained control of what we perceive and how we act, from the gross actions and perceptions of Manas to the discriminating decisions of Buddhi, is very intuitively appealing and also appears to be grounded in psychological and neural processes.
Mindfulness (Buddhism-based) has become all the rage nowadays. Yet if we look at the spiritual traditions of India, while Yoga, defined as Chitta vritti nirodhah (or “Yoga is the silencing of the modifications of the mind”), does refer to being in the present (here-and-now) and not being disturbed by the perturbations of Chitta (memories of the past or expectations of the future), one also needs to go beyond just Chitta vritti, to addressing the Ahamkara coloring, and finally to trying to achieve the Buddha nature, where there is little disparity between doing and being. (Mindfulness) meditation needs to move beyond being curious, non-judgemental and in the present, to a place where one does have a judgment function, but one that is perfectly attuned.
Let us see how they describe thought speed and variability and what their hypothesis is:
1. The principle of thought speed. Fast thinking, which involves many thoughts per unit time, generally produces positive affect. Slow thinking, which involves few thoughts per unit time, generally produces less positive affect. At the extremes of thought speed, racing thoughts can elicit feelings of mania, and sluggish thoughts can elicit feelings of depression.
2. The principle of thought variability. Varied thinking generally produces positive affect, whereas repetitive thinking generally produces negative affect. This principle is derived in part from the speed principle: when thoughts are repetitive, thought speed (thoughts per unit time) diminishes. At its extremes, repetitive thinking can elicit feelings of depression (or anxiety), and varied thinking can elicit feelings of mania (or reverie).
Let me clarify at the outset that they are aware of the effects of thought speed on variability and vice versa, as well as the effects of mood on felt energy and vice versa; thus they know that one can confound the other. Another angle they consider is the relationship between thought speed/variability, i.e. the form of thought, and the contents of thought (whether these have emotional salience or are neutral); they investigated whether the effects of speed and variability were confounded with thought content, and found evidence against this interactionist view.
Let me also clarify that I differ slightly (based on my interpretation of their data) from their original hypothesis: I believe their data shows that speed affects felt energy and variability affects mood, and that the effects of speed on mood may be mediated by the effect of speed on felt energy, while similarly the effect of variability on felt energy may be mediated by its effects on mood.
Thus my claim is that:
- Thought speed leads to more felt energy. The extreme of ‘racing thoughts’ leads to the manic feeling of being very energetic (when accompanied with positive mood, this may give rise to feelings of grandiosity: I have the energy to achieve anything), but it may also lead to anxiety states (when accompanied with negative affect), in which one cannot really suppress a negative chain of thoughts, one following the other in fast succession, regarding the object of one’s anxiety. The counterpart to this is the state where thoughts come slowly (writer’s block etc.); when accompanied with negative affect, this can easily be viewed as depression.
- Thought variability leads to more positive affect. The extreme of ‘tangential thoughts’ leads to the manic feeling of being in a good mood (when accompanied with high energy, this manifests as feelings of euphoria), while the same tangential thoughts, when accompanied by low felt energy, may actually be felt as serenity/calmness/reverie. The counterpart to this is the state of thoughts that are stuck in a rut; when accompanied with low energy, this leads to feelings of depression and sadness.
Thus, to put it simply: there are two dimensions one needs to take care of, mood (thought variability) × energy (thought speed), and the high and low extremes on these dimensions are opposites of each other.
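This two-dimensional reading can be summarized in a few lines of code, purely as an illustration; the state labels follow my interpretation above, not the original paper’s Figure 1:

```python
# Illustrative sketch: thought speed maps to felt energy, thought
# variability maps to mood, and the four extreme combinations map to
# four felt states. The labels are this post's interpretation.

def affective_state(speed: str, variability: str) -> str:
    """Map extremes of thought speed and variability to a felt state.

    speed: 'fast' or 'slow' (high or low felt energy)
    variability: 'varied' or 'repetitive' (positive or negative mood)
    """
    states = {
        ("fast", "varied"): "euphoria/elation",    # high energy + positive mood
        ("fast", "repetitive"): "anxiety",         # high energy + negative mood
        ("slow", "varied"): "serenity/reverie",    # low energy + positive mood
        ("slow", "repetitive"): "depression",      # low energy + negative mood
    }
    return states[(speed, variability)]

print(affective_state("fast", "varied"))      # euphoria/elation
print(affective_state("slow", "repetitive"))  # depression
```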
Before we move on, I’ll let the authors present their other two claims too:
3. The combination principle. Fast, varied thinking prompts elation; slow, repetitive thinking prompts dejection. When speed and variability oppose each other, such that one is low and the other high, individuals’ affective experience will depend on factors including which one of the two factors is more extreme. The psychological state elicited by such combinations can vary apart from its valence, as shown in Figure 1. For example, repetitive thinking can elicit feelings of anxiety rather than depression if that repetitive thinking is rapid. Notably, anxious states generally are more energetic than depressive states. Moreover, just as fast-moving physical objects possess more energy than do identical slower objects, fast thinking involves more energy (e.g., greater wakefulness, arousal, and feelings of energy) than does slow thinking.
4. The content independence principle. Effects of thought speed and variability are independent of the specific nature of thought content. Powerful affective states such as depression and anxiety have been traced to irrational and dysfunctional cognitions (e.g., Beck, 1976). According to the independence principle, effects of mental motion on mood do not require any particular type of thought content.
They review a number of factors and studies that all point to a causal link between thought speed and energy and between thought variability and mood. More importantly, they show that the effects of thought speed and variability are independent of the effects of thought content on mood. I’ll not go into the details of the studies and experiments they performed, as their article is available freely online and one can read it for oneself (it makes for excellent reading); suffice it to say that I believe they are on the right track and have evidence to back their claims.
What are the implications of this?
The speed and repetition of thoughts, we suggest, could be manipulated in order to alter and alleviate some of the mood and energy symptoms of mental disorders. The slow and repetitive aspects of depressive thinking, for example, seem to contribute to the disorder’s affective symptoms (e.g., Ianzito et al., 1974; Judd et al., 1994; Nolen-Hoeksema, 1991; Philipp et al., 1991; Segerstrom et al., 2000). Thus, techniques that are effective in speeding cognition and in breaking the cycle of repetitive thought may be useful in improving the mood and energy levels of depressed patients. The potential of this sort of treatment is suggested by Pronin and Wegner’s (2006) study, in which speeding participants’ cognitions led to improved mood and energy, even when those cognitions were negative, self-referential, and decidedly depressing. It also is suggested by Gortner et al.’s (2006) finding that an expressive writing manipulation that decreased rumination (even while inducing thoughts about an upsetting experience) rendered recurrent depression less likely.
There also is some evidence suggesting that speeding up even low-level cognition may improve mood in clinically depressed patients. In one experiment, Teasdale and Rezin (1978) instructed depressed participants to repeat aloud one of four letters of the alphabet (A, B, C, or D) presented in random order every 1, 2, or 4 s. They found that those participants required to repeat the letters at the fastest rate experienced the most reduction in depressed mood. Similar techniques could be tested for the treatment of other mental illnesses. For example, manipulations might be designed to decrease the mental motion of manic patients, perhaps by introducing repetitive and slow cognitive stimuli. Or, in the case of anxiety disorders, it would be worthwhile to test interventions aimed at inducing slow and varied thought (as opposed to the fast and repetitive thought characteristic of anxiety). The potential effectiveness of such interventions is supported by the fact that mindfulness meditation, which involves slow but varied thinking, can lessen anxiety, stress, and arousal.
hat tip: Ulterior Motives
Pronin, E., & Jacobs, E. (2008). Thought speed, mood, and the experience of mental motion. Perspectives on Psychological Science, 3(6), 461-485. DOI: 10.1111/j.1745-6924.2008.00091.x
Pronin, E., & Wegner, D. (2006). Manic thinking: Independent effects of thought speed and thought content on mood. Psychological Science, 17(9), 807-813. DOI: 10.1111/j.1467-9280.2006.01786.x
This article continues my series on major conscious and unconscious processes in the brain. In my last two posts I have talked about 8 major unconscious processes in the brain, viz. the sensory, motor, learning, affective, cognitive (deliberative), modelling, communication and attentive systems. Today, I will not talk about the brain in particular, but will approach the problem from a slightly different problem domain: that of modelling/implementing an artificial brain/mind.
I am a computer scientist, so I am vaguely aware of the varied approaches used to model/implement the brain. Many of these use computers, though not every approach assumes that the brain is a computer.
Before continuing I would briefly like to digress and link to one of my earlier posts regarding the different traditions of psychological research in personality and how I think they fit an evolutionary stage model. That may serve as background to the type of sweeping analysis and generalisation that I am going to do. To be fair, it is also important to recall the Indian parable in which, when asked to describe an elephant, a few blind men each described what they could lay their hands on and thus provided a partial and incorrect picture of the elephant. The one who grabbed the tail described it as snake-like, and so forth.
With that in mind, let us look at the major approaches to modelling/implementing the brain/intelligence/mind. Also remember that up to now I have been most interested in unconscious brain processes, and I sincerely believe that all the unconscious processes can, and will, be successfully implemented in machines. I do not believe machines will become sentient (at least any time soon), but that question is for another day.
So, with due thanks to @wildcat2030, I came across this book today and could immediately see how the different major approaches to artificial robot brains are heavily influenced by (and follow) the first five evolutionary stages and the first five unconscious processes in the brain.
The book in question is ‘Robot Brains: Circuits and Systems for Conscious Machines’ by Pentti O. Haikonen, and although he is most interested in conscious machines, I will restrict myself to intelligent but unconscious machines/robots.
The first chapter of the book (which has made it to my reading list) is available at the Wiley site in its entirety, and I quote extensively from there:
Presently there are five main approaches to the modelling of cognition that could be used for the development of cognitive machines: the computational approach (artificial intelligence, AI), the artificial neural networks approach, the dynamical systems approach, the quantum approach and the cognitive approach. Neurobiological approaches exist, but these may be better suited for the eventual explanation of the workings of the biological brain.
The computational approach (also known as artificial intelligence, AI) towards thinking machines was initially worded by Turing (1950). A machine would be thinking if the results of the computation were indistinguishable from the results of human thinking. Later on Newell and Simon (1976) presented their Physical Symbol System Hypothesis, which maintained that general intelligent action can be achieved by a physical symbol system and that this system has all the necessary and sufficient means for this purpose. A physical symbol system was here the computer that operates with symbols (binary words) and attached rules that stipulate which symbols are to follow others. Newell and Simon believed that the computer would be able to reproduce human-like general intelligence, a feat that still remains to be seen. However, they realized that this hypothesis was only an empirical generalization and not a theorem that could be formally proven. Very little in the way of empirical proof for this hypothesis exists even today and in the 1970s the situation was not better. Therefore Newell and Simon pretended to see other kinds of proof that were in those days readily available. They proposed that the principal body of evidence for the symbol system hypothesis was negative evidence, namely the absence of specific competing hypotheses; how else could intelligent activity be accomplished by man or machine? However, the absence of evidence is by no means any evidence of absence. This kind of ‘proof by ignorance’ is too often available in large quantities, yet it is not a logically valid argument. Nevertheless, this issue has not yet been formally settled in one way or another. Today’s positive evidence is that it is possible to create world-class chess-playing programs and these can be called ‘artificial intelligence’. The negative evidence is that it appears to be next to impossible to create real general intelligence via preprogrammed commands and computations.
The original computational approach can be criticized for the lack of a cognitive foundation. Some recent approaches have tried to remedy this and consider systems that integrate the processes of perception, reaction, deliberation and reasoning (Franklin, 1995, 2003; Sloman, 2000). There is another argument against the computational view of the brain. It is known that the human brain is slow, yet it is possible to learn to play tennis and other activities that require instant responses. Computations take time. Tennis playing and the like would call for the fastest computers in existence. How could the slow brain manage this if it were to execute computations?
The artificial neural networks approach, also known as connectionism, had its beginnings in the early 1940s when McCulloch and Pitts (1943) proposed that the brain cells, neurons, could be modelled by a simple electronic circuit. This circuit would receive a number of signals, multiply their intensities by the so-called synaptic weight values and sum these modified values together. The circuit would give an output signal if the sum value exceeded a given threshold. It was realized that these artificial neurons could learn and execute basic logic operations if their synaptic weight values were adjusted properly. If these artificial neurons were realized as hardware circuits then no programs would be necessary and biologically plausible artificial replicas of the brain might be possible. Also, neural networks operate in parallel, doing many things simultaneously. Thus the overall operational speed could be fast even if the individual neurons were slow. However, problems with artificial neural learning led to complicated statistical learning algorithms, ones that could best be implemented as computer programs. Many of today’s artificial neural networks are statistical pattern recognition and classification circuits. Therefore they are rather removed from their original biologically inspired idea. Cognition is not mere classification and the human brain is hardly a computer that executes complicated synaptic weight-adjusting algorithms.
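As an aside, the McCulloch–Pitts style neuron described in the excerpt above is simple enough to sketch in a few lines: multiply the inputs by synaptic weights, sum them, and fire if the sum reaches a threshold. With suitable weights it realizes basic logic operations (the weights and thresholds below are my illustrative choices):

```python
# A minimal threshold neuron in the McCulloch-Pitts spirit: weighted
# sum of inputs compared against a threshold. The specific weights and
# thresholds here are illustrative, not from the book.

def neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs reaches the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def AND(a, b):
    # both inputs must be active to reach the threshold of 2
    return neuron([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    # either active input suffices to reach the threshold of 1
    return neuron([a, b], weights=[1, 1], threshold=1)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(0, 1), OR(0, 0))    # 1 0
```

Hardware realizations of such units need no stored program, which is what made the original connectionist proposal biologically appealing before statistical learning algorithms took over.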
The human brain has some 10 to the power of 11 neurons and each neuron may have tens of thousands of synaptic inputs and input weights. Many artificial neural networks learn by tweaking the synaptic weight values against each other when thousands of training examples are presented. Where in the brain would reside the computing process that would execute synaptic weight adjusting algorithms? Where would these algorithms have come from? The evolutionary feasibility of these kinds of algorithms can be seriously doubted. Complicated algorithms do not evolve via trial and error either. Moreover, humans are able to learn with a few examples only, instead of having training sessions with thousands or hundreds of thousands of examples. It is obvious that the mainstream neural networks approach is not a very plausible candidate for machine cognition although the human brain is a neural network.
Dynamical systems were proposed as a model for cognition by Ashby (1952) already in the 1950s and have been developed further by contemporary researchers (for example Thelen and Smith, 1994; Gelder, 1998, 1999; Port, 2000; Wallace, 2005). According to this approach the brain is considered as a complex system with dynamical interactions with its environment. Gelder and Port (1995) define a dynamical system as a set of quantitative variables, which change simultaneously and interdependently over quantitative time in accordance with some set of equations. Obviously the brain is indeed a large system of neuron activity variables that change over time. Accordingly the brain can be modelled as a dynamical system if the neuron activity can be quantified and if a suitable set of, say, differential equations can be formulated. The dynamical hypothesis sees the brain as comparable to analog feedback control systems with continuous parameter values. No inner representations are assumed or even accepted. However, the dynamical systems approach seems to have problems in explaining phenomena like ‘inner speech’. A would-be designer of an artificial brain would find it difficult to see what kind of system dynamics would be necessary for a specific linguistically expressed thought. The dynamical systems approach has been criticized, for instance by Eliasmith (1996, 1997), who argues that the low dimensional systems of differential equations, which must rely on collective parameters, do not model cognition easily and the dynamicists have a difficult time keeping arbitrariness from permeating their models. Eliasmith laments that there seems to be no clear ways of justifying parameter settings, choosing equations, interpreting data or creating system boundaries. Furthermore, the collective parameter models make the interpretation of the dynamic system’s behaviour difficult, as it is not easy to see or determine the meaning of any particular parameter in the model. 
Obviously these issues would translate into engineering problems for a designer of dynamical systems.
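Gelder and Port’s definition quoted above — a set of quantitative variables changing simultaneously and interdependently over time according to some set of equations — can be illustrated with a toy system; the damped oscillator and Euler integration below are my generic example, not a model from the book:

```python
# A toy dynamical system: two variables x and y changing simultaneously
# and interdependently over time per a pair of differential equations,
# integrated with simple Euler steps. The damping term drives the state
# toward the origin. This is an illustrative sketch only.

def simulate(x0, y0, dt=0.01, steps=1000):
    """Integrate dx/dt = y, dy/dt = -x - 0.5*y (a damped oscillator)."""
    x, y = x0, y0
    for _ in range(steps):
        dx = y
        dy = -x - 0.5 * y
        x, y = x + dx * dt, y + dy * dt
    return x, y

x, y = simulate(1.0, 0.0)
# after 10 time units the damped state has decayed well inside the unit box
print(abs(x) < 1.0 and abs(y) < 1.0)  # True
```

Even in this trivial case, the criticisms quoted above are visible: the damping coefficient, the time step and the equations themselves are arbitrary choices, and nothing in the state (x, y) has an obvious cognitive interpretation.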
The quantum approach maintains that the brain is ultimately governed by quantum processes, which execute nonalgorithmic computations or act as a mediator between the brain and an assumed more-or-less immaterial ‘self’ or even ‘conscious energy field’ (for example Herbert, 1993; Hameroff, 1994; Penrose, 1989; Eccles, 1994). The quantum approach is supposed to solve problems like the apparently nonalgorithmic nature of thought, free will, the coherence of conscious experience, telepathy, telekinesis, the immortality of the soul and others. From an engineering point of view even the most practical propositions of the quantum approach are presently highly impractical in terms of actual implementation. Then there are some proposals that are hardly distinguishable from wishful fabrications of fairy tales. Here the quantum approach is not pursued.
The cognitive approach maintains that conscious machines can be built because one example already exists, namely the human brain. Therefore a cognitive machine should emulate the cognitive processes of the brain and mind, instead of merely trying to reproduce the results of the thinking processes. Accordingly the results of neurosciences and cognitive psychology should be evaluated and implemented in the design if deemed essential. However, this approach does not necessarily involve the simulation or emulation of the biological neuron as such, instead, what is to be produced is the abstracted information processing function of the neuron.
A cognitive machine would be an embodied physical entity that would interact with the environment. Cognitive robots would be obvious applications of machine cognition and there have been some early attempts towards that direction. Holland seeks to provide robots with some kind of consciousness via internal models (Holland and Goodman, 2003; Holland, 2004). Kawamura has been developing a cognitive robot with a sense of self (Kawamura, 2005; Kawamura et al., 2005). There are also others. Grand presents an experimentalist’s approach towards cognitive robots in his book (Grand, 2003).
A cognitive machine would be a complete system with processes like perception, attention, inner speech, imagination, emotions as well as pain and pleasure. Various technical approaches can be envisioned, namely indirect ones with programs, hybrid systems that combine programs and neural networks, and direct ones that are based on dedicated neural cognitive architectures. The operation of these dedicated neural cognitive architectures would combine neural, symbolic and dynamic elements.
However, the neural elements here would not be those of the traditional neural networks; no statistical learning with thousands of examples would be implied, no backpropagation or other weight-adjusting algorithms are used. Instead the networks would be associative in a way that allows the symbolic use of the neural signal arrays (vectors). The ‘symbolic’ here does not refer to the meaning-free symbol manipulation system of AI; instead it refers to the human way of using symbols with meanings. It is assumed that these cognitive machines would eventually be conscious, or at least they would reproduce most of the folk psychology hallmarks of consciousness (Haikonen, 2003a, 2005a). The engineering aspects of the direct cognitive approach are pursued in this book.
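To make the contrast with backpropagation-trained networks concrete, here is a minimal sketch of one-shot Hebbian association between two signal vectors. This only illustrates the general idea of an associative network; the names and the outer-product rule are my own illustration, not Haikonen's actual architecture.

```python
# One-shot Hebbian (outer-product) association: no backpropagation,
# no thousands of training examples. A key vector is bound to a
# value vector in a single pass.

def hebbian_associate(pairs, dim):
    # Weight matrix built by summing outer products of (key, value) pairs.
    W = [[0.0] * dim for _ in range(dim)]
    for key, value in pairs:
        for i in range(dim):
            for j in range(dim):
                W[i][j] += value[i] * key[j]
    return W

def recall(W, key):
    # Matrix-vector product followed by a sign threshold.
    out = [sum(W[i][j] * key[j] for j in range(len(key))) for i in range(len(W))]
    return [1 if o > 0 else -1 for o in out]

# Two bipolar (+1/-1) patterns standing in for 'symbol' signal arrays.
a, b = [1, -1, 1, -1], [-1, -1, 1, 1]
W = hebbian_associate([(a, b)], dim=4)
print(recall(W, a))  # recovers b
```

Presenting the key vector `a` retrieves its associated vector `b`, which is the sense in which the neural signal arrays can be used 'symbolically'.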
Now, to me, these computational approaches are all unidimensional:
- The computational approach is suited for symbol manipulation and information representation, and might give good results when used in systems that have mostly ‘sensory’ features, like forming a mental representation of the external world, a chess game, etc. Here something (stimuli from the world) is represented as something else (an internal symbolic representation).
- The Dynamical Systems approach is guided by interactions with the environment and the principles of feedback control systems, and is also prone to ‘arbitrariness’ or ‘randomness’. It is perfectly suited to implementing the ‘motor system‘ of the brain, as one of their common features is apparent unpredictability (volition) despite being deterministic (chaos theory).
- The Neural networks or connectionism approach is well suited to implementing the ‘learning system’ of the brain, and we can very well see that the best neural-network-based systems are those that can categorize and classify things, just like ‘the learning system’ of the brain does.
- The quantum approach to the brain I haven’t studied enough to comment on, but the action-tendencies of the ‘affective system’ seem all too similar to the superimposed, simultaneous states that exist in a wave function before it is collapsed. Being in an affective state just means having a set of many possible related and relevant actions simultaneously activated, with one of them then somehow decided upon and actualized. I’m sure that if we could ever model emotions in machines, it would have to use quantum principles of wave functions, entanglement, etc.
- The cognitive approach, again, I haven’t got the hang of yet, but it seems the proposal is to build some design into the machine that is based on actual brain and mind implementations. Embodiment seems important, and so does emulating the information-processing functions of neurons. I would stick my neck out and predict that whatever this cognitive approach is, it should be best able to model the reasoning, evaluative and decision-making functions of the brain. I am reminded of the computational modelling methods used in cognitive science (whether symbolic or subsymbolic modelling) to functionally decompose a cognitive process, which again aid in decision making/reasoning (see the wikipedia entry).
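The ‘apparent unpredictability despite being deterministic’ point in the dynamical-systems bullet above can be demonstrated with the logistic map, a textbook chaotic system: two trajectories starting from nearly identical initial conditions diverge after a few dozen iterations, with no randomness involved anywhere.

```python
# Logistic map: a fully deterministic one-line update rule that
# nevertheless behaves unpredictably for r = 4.

def logistic(x, r=4.0):
    return r * x * (1 - x)

x, y = 0.2, 0.2 + 1e-9   # nearly identical starting points
max_gap = 0.0
for _ in range(60):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

# The tiny initial difference (a billionth) is amplified to
# macroscopic size within a few dozen steps.
print(max_gap)
```

This is the sense in which a deterministic motor system could still look ‘volitional’ from the outside.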
Overall, I would say there is room for further improvement in the way we build more intelligent machines. They could be made such that they have two models of the world, one deterministic and another chaotic, and use the two models simultaneously (a sixth stage of modelling); then they could communicate with other machines and thus learn language (some simulation methods for language abilities do involve agents communicating with each other using arbitrary tokens, with a language developing later) (a seventh stage); and then they could be implemented such that they have a spotlight of attention (an eighth stage) whereby some coherent systems are amplified and others suppressed. Of course, all this is easier said than done; we will need at least three more major approaches to modelling and implementing the brain/intelligence before we can model every major unconscious process in the brain. To model consciousness and program sentience is an uphill task from there and would definitely require a leap in our understanding/capabilities.
Do tell me if you find the above reasonable, and whether you believe that these major approaches to artificial brain implementation are guided and constrained by the major unconscious processes in the brain, and that we can learn much about the brain from the study of these artificial approaches, and vice versa.
Today I plan to touch upon the topic of consciousness (from which many bloggers shy away) and, more broadly, try to delineate what I believe are the important different conscious and unconscious processes in the brain. I will be heavily using my evolutionary stages model for this.
To clarify at the very start, I do not believe in a purely reactive nature of organisms; I believe that apart from reacting to stimuli/the world, they also act on their own, and are thus agents. To elaborate, I believe that neuronal groups and circuits may fire on their own and thus lead to behavior/action. I do not claim that this firing is under voluntary/volitional control; it may be random. The important point to note is that there is spontaneous motion.
- Sensory system: So to start with, I propose that the first function/process the brain needs to develop is to sense its surroundings. This is to avoid predators/harm in general. This sensory function of the brain/sense organs may be unconscious and need not become conscious: as long as an animal can sense danger, even though it may not be aware of the danger, it can take appropriate action, a simple ‘action’ being changing its color to merge with the background.
- Motor system: The second function/process that the brain needs to develop is a system that enables motion/movement. This is primarily to explore the environment for food/nutrients. Prey are not going to walk into your mouth; you have to move around and locate them. Again, this movement need not be volitional/conscious: as long as the animal moves randomly and sporadically to explore new environments, it can ‘see’ new things and eat a few. This ‘seeing’ may be as simple as sensing the chemical gradient in a new environment.
- Learning system: The third function/process that the brain needs to develop is a system that enables learning. It is not enough to sense the environmental here-and-now. One needs to learn the contingencies of the world and remember them, both in space and time. I am inclined to believe that this is primarily Pavlovian conditioning and associative learning, though I don’t rule out operant learning. Again, this learning need not be conscious: one need not explicitly refer to a memory to utilize it; unconscious learning and memory of events can suffice and can drive interactions. I also believe that the need for this function is primarily driven by the fact that one interacts with similar environments/conspecifics/predators/prey, and it helps to remember which environmental conditions/operant actions lead to what outcomes. This learning could be as simple as learning that stimulus A predicts stimulus B and/or that action C predicts reward D.
- Affective/action-tendencies system: The fourth function I propose the brain needs to develop is a system to control its motor system/behavior by bringing it more in sync with the organism’s internal state. This, I propose, is done by a group of neurons monitoring the activity of other neurons/visceral organs, thus becoming aware (in a non-conscious sense) of the global state of the organism and of the probability that a particular neuronal group will fire in the future; by their outputs, they may then be able to enable one group to fire while inhibiting other groups from firing. To clarify by way of example, some neuronal groups may be responsible for movement. Another neuronal group may receive inputs from these, as well as, say, input from the gut indicating that no movement has happened for a time and that the organism has also not eaten for a time and is thus in a ‘hungry’ state. This may prompt these neurons to send excitatory outputs to the movement-related neurons, biasing them towards firing and thus increasing the probability that motion will take place; perhaps the organism, by indulging in exploratory behavior, will be able to satisfy its hunger. Of course, they will inhibit other neuronal groups from firing and will themselves stop firing when appropriate motion takes place/a prey is eaten. Again, none of this has to be conscious: the state of the organism (like hunger) can be discerned unconsciously, and the action-tendencies biasing foraging behavior can also be activated unconsciously; as long as the organism prefers certain behaviors over others depending on its internal state, everything works perfectly. I propose that (unconscious) affective (emotional) states and systems have emerged to fulfill exactly this need of being able to differentially activate different action-tendencies suited to the needs of the organism.
I will also stick my neck out and claim that the activation of a particular emotion/affective system biases our sensing too. If the organism is hungry, the food tastes better (is unconsciously more vivid), and vice versa. Thus affects are not only action-tendencies but also, to an extent, sensing-tendencies.
- Decisional/evaluative system: The last function (for now; remember, I adhere to eight-stage theories, and we have just seen five brain processes in increasing hierarchy) that the brain needs to have is a system to decide/evaluate. Learning lets us predict our world as well as the consequences of our actions. Affective systems provide us some control over our behavior and over our environment, but are automatically activated by the state we are in. Something needs to bring these together so that the competition between the actions triggered by the state we are in (affective action-tendencies) and the actions that may be beneficial given the learning associated with the current stimuli/state of the world is resolved satisfactorily. One has to balance the action-to-reaction ratio and the subjective versus objective interpretation/sensation of the environment. The decisional/evaluative system, I propose, does this by associating values with different external event outcomes and different internal state outcomes and by resolving the trade-off between the two. This again need not be conscious: given a stimulus predicting a predator in the vicinity, and an internal state of hunger, the organism may have attached more value to ‘avoid being eaten’ than to ‘finding prey’ and thus may not move, but camouflage. On the other hand, if the organism’s value system is such that it prefers a hero’s death on the battlefield to starvation, it may move (in search of food). Again, this could exist in the simplest of unicellular organisms.
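The decisional/evaluative trade-off described in the last bullet can be sketched as a toy value comparison: each candidate action gets a value from learned external outcomes plus a bias from the current internal (affective) state, and the highest-valued action wins. All numbers and action names here are illustrative only, not a claim about real organisms.

```python
# Toy decisional/evaluative system: resolve the trade-off between
# learned external outcome values and internal state biases.

def decide(actions, external_value, internal_bias):
    # Sum the learned outcome value and the state-dependent bias,
    # then pick the highest-valued action.
    scored = {a: external_value[a] + internal_bias[a] for a in actions}
    return max(scored, key=scored.get)

actions = ["forage", "camouflage"]
hunger_bias = {"forage": 1.0, "camouflage": 0.0}   # internal state favors food

# Predator sensed: 'avoid being eaten' carries more learned value.
predator_near = {"forage": 2.0, "camouflage": 5.0}
print(decide(actions, predator_near, hunger_bias))  # camouflage

# Coast clear: hunger now tips the balance towards foraging.
coast_clear = {"forage": 2.0, "camouflage": 0.5}
print(decide(actions, coast_clear, hunger_bias))    # forage
```

The same hungry organism makes different choices depending on how the external values stack up against its internal state, which is all the ‘evaluation’ amounts to here.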
Of course, all of these brain processes could (and in humans indeed do) have their conscious counterparts, like Perception, Volition, episodic Memory, Feelings and Deliberation/thought. But that is a different story for a new blog post!
And of course, one can also conceive of the above in purely reductionist form as the chain below:
sense -> recognize & learn -> evaluate options and decide -> emote and activate action tendencies -> execute and move.
and then one can also say that movement leads to new sensation, so the above is not a chain but part of a cycle; all that is valid, but I would sincerely request my readers to consider the possibility of spontaneous and self-driven behavior as separate from reactive motor behavior.
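The cycle, with the spontaneous self-driven behavior kept separate from the reactive chain, could be caricatured in code as follows. The stage names are the post's own; everything else is a hedged illustration, not a cognitive model.

```python
import random

def step(sensed, memory, rng):
    memory.append(sensed)                                  # recognize & learn
    reactive = "approach" if sensed == "food" else "wait"  # evaluate & decide
    # Spontaneous, self-driven firing: sometimes the agent acts on its own,
    # regardless of what was sensed -- the non-reactive behavior the post
    # asks readers to keep separate from the chain.
    spontaneous = "explore" if rng.random() < 0.3 else None
    return spontaneous or reactive                         # execute & move

rng = random.Random(42)
memory = []
# Each action would change what is sensed next, closing the chain into a cycle.
actions = [step(s, memory, rng) for s in ["food", "empty", "food"]]
print(actions)
```

Even this toy agent is not purely reactive: the same sensed world can produce different actions depending on its own spontaneous activity.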