When I first heard the book title "Why Quitters Win: Decide to Be Excellent", I was, to say the least, very much intrigued. Was Nick trying to say that we should stop doing something midway if we know it is going to fail, ignoring the sunk costs? Or was it about quitting when faced with unreasonable odds, rather than doubling down on effort and commitment? I believe in sticking with the choices you make till you have given them your last shot, and so I was slightly apprehensive.
However, what Nick Tasler means is not that we should start many things sequentially and quit them early if they are likely to fail. What he means, in a broad sense, is that we should not start off and get absorbed in too many parallel threads in the first place. Instead, we should define a theme, or decision pulse, stick with it, and let it guide our day-to-day decisions, while actively quitting the million other things that are not in line with that main theme/decision pulse.
To elaborate, the book offers advice for a business/organizational scenario: an organization should spend time spelling out its one-time decision pulse, a guiding value that enables managers at all levels to determine for themselves whether a decision they are about to take will be good for the organization (i.e., whether it is in harmony with the decision pulse). This seems like reasonable and obvious advice, but only in hindsight. Practically, it is very difficult to determine what exactly one's guiding value is or should be. What is even more difficult is to focus on that one value/principle and *stop* doing things, or being driven by values, that lie outside it.
Easier said than done. Nick proposes a three-step guiding cheat sheet: Know (find out/define your decision pulse); Think (appraise the action plan in light of the decision pulse, taking alternative scenarios and contrary views into account); and Do (execute by getting everyone aligned with the single focus and taking action, rather than falling into the trap of making a decision either way by stalling or not acting/deciding). Quitting other options and burning the bridges behind you is important at each step. For example, your vision/decision pulse cannot be vague or over-inclusive; it has to be sharp and concrete enough to focus on one thing and consciously exclude other options, so that it is useful when decisions involve tradeoffs between competing values, as they always do in real-world scenarios. Also, while it is important to have action plans, it is more important to have a non-action plan: given your new priorities and direction, what are the things you need to stop doing, given that taking up something new and fitting it into your day-to-day schedule will force time away from other activities? Lastly, when executing, it is best to leave plan Bs foreclosed: for plan A to succeed, plans B and C must be sacrificed.
Nick has enough evidence-based studies to back his proposition, but the way he elaborates these themes is through anecdotes and business case studies, which make for engaging reading. For example, his account of how Starbucks, whose primary value proposition was being a coffee place, was getting waylaid by serving cheese sandwiches for breakfast, whose cheesy aroma spoiled the coffee aroma, and how the Starbucks founder used the guiding value to put an end to the lucrative breakfast/sandwich business and realign Starbucks with its roots, is illuminating and makes the principles involved clear.
The book is full of such illuminating examples, which let one see the power of these 'quitting' actions in action and appreciate the theory and ideas in light of real-world historical examples.
The book is an absorbing and light read, and is sure to grip you till the end. In the last chapter, Nick also elaborates how the same strategic framework can be applied to personal planning and self-development. He lists support for some eight universal personal values and argues that one should ideally choose one of those values and let all one's personal decisions be guided by it. I could fit those eight values into my ABCD and fundamental four frameworks, and would like to spell them out here for the benefit of readers:
They are eight values, arranged in pairs that are slightly opposed to each other:
1. Security - Freedom (pain-pleasure, Affect-based polarity)
2. Stimulation - Authority (active-passive, Behavior-based polarity)
3. Achievement - Relationships (self-other, Drive/motivation-based polarity)
4. Power - Humanity (broad-narrow, Cognition-based polarity)
Of course, this is just a peripheral part of what Nick’s book is about, but it resonated with me most.
Lastly, I am at a stage in my life where, although I do have a guiding decision pulse, i.e. "anything and everything that helps me achieve and leverage positive psychology based knowledge and interventions in workplace and school settings", I am still spread too broadly. For example, I am doing a plethora of MOOCs on topics ranging from management and leadership, to evolution and genetics, to psychology and neuroscience. I simultaneously manage a full-time job, read a lot of psychology books, do book reviews, am writing a psychology book of my own, and have 3-4 active blogs to which I should contribute on a regular basis. I am planning to attend a 15-day cognition workshop in the near future. On top of this, I pride myself on being a curator and share stuff on Scoop.it, Twitter, Facebook, etc. I definitely needed the advice Nick has so timely provided: to make a non-action plan and quit doing some things.
It's rare for me to proclaim a book life-changing, but this one does seem to be right up that alley. I can't vouch for you, but I at least am planning to apply its principles to my life in earnest, and am sure it will be a life-changing experience. Thanks, Nick, for writing this book and sharing it so graciously with me for review. I hope many more people become aware of your ideas and are able to apply them to their lives.
Bozo Sapiens: Why to Err Is Human is a book that tries to document the frailties of our decision-making process and the underlying psychological mechanisms behind them.
Written with a lay audience in mind, it reads easily and is fun to read. As per the site, it is in the tradition of books like Blink and Stumbling on Happiness, and aims to cater to the same market segment of people who are interested in psychology and how it affects day-to-day life. While most of the psychology studies were already familiar to me, they would be novel for a lay audience and would definitely interest and entertain, as well as inform and guide. I myself came across a few new and worthwhile studies and feel enriched for having been made aware of them. As is common when writing for a popular audience, the Kaplans often gloss over or do not highlight all the subtleties involved, but it must go to their credit that they are able to explain the studies lucidly and clearly, without significantly diluting the science involved. The only peeve I have is that the sections, and the studies covered in them, somehow felt unconnected, not flowing smoothly from one to the other.
The organization of the chapters is decent: one chapter focuses on perceptual errors, another on action-based errors, and yet another on errors born of group mentality. The section on perception seemed to me the strongest, and the section on groups perhaps the weakest. Despite its title, it is not a bleak view of humanity: knowing our heuristics, biases and design features/bugs can only help us act better. It is an easy read and would perhaps be savored by those with a general interest in psychology; for the experts there are some nuggets spread here and there that may make it worthwhile to skim through the book.
Disclaimer: I received a free e-copy of the book for review.
PS: Would my readers like to see more book reviews featured on the Mouse Trap? Some books I would love to review and highlight include those by Nettle (Happiness, Personality), Gazzaniga (The Mind's Past), and Ridley (Genome, Nature via Nurture), etc. Do let me know via comments/Skribit suggestions using the left sidebar.
Major conscious and unconscious processes in the brain: part 5: Physical substrates of A-consciousness
Today, I would like to point to a few physical models and theories of consciousness that have been proposed, which show that consciousness still resides in the brain, although the neural/supportive processes may be more esoteric.
I should forewarn readers that all the theories involve an advanced understanding of brains/physics/biochemistry etc., and that I do not feel qualified enough to understand or explain all the different theories in their entirety (or even have a surface understanding of them); yet I believe there are important underlying patterns, and that applying the eight-stage model to these approaches will only help us further understand, predict, and search in the right directions. The style of this post is similar to the part 3 post on robot minds that delineated the different physical approaches used to implement intelligence/brains in machines.
With that as background, let us look at the major theoretical approaches to locating consciousness and defining its underlying substrates. I could find six different physical hypotheses about consciousness on the Wikipedia page:
- Orch-OR theory
- Electromagnetic theories of consciousness
- Holonomic brain theory
- Quantum mind
- Space-time theories of consciousness
- Simulated Reality
Now let me briefly introduce each of the theories and where they seem to have been most successful. Again, I believe that though this time sighted people are perceiving the elephant, they are each hooked on to its different aspects and need to bind their perspectives together to arrive at the real nature of the elephant.
The Orch OR theory combines Penrose’s hypothesis with respect to the Gödel theorem with Hameroff’s hypothesis with respect to microtubules. Together, Penrose and Hameroff have proposed that when condensates in the brain undergo an objective reduction of their wave function, that collapse connects to non-computational decision taking/experience embedded in the geometry of fundamental spacetime.
The theory further proposes that the microtubules both influence and are influenced by the conventional activity at the synapses between neurons. The Orch in Orch OR stands for orchestrated to give the full name of the theory Orchestrated Objective Reduction. Orchestration refers to the hypothetical process by which connective proteins, known as microtubule associated proteins (MAPs) influence or orchestrate the quantum processing of the microtubules.
Hameroff has proposed that condensates in microtubules in one neuron can link with other neurons via gap junctions. In addition to the synaptic connections between brain cells, gap junctions are a different category of connections, where the gap between the cells is sufficiently small for quantum objects to cross it by means of a process known as quantum tunnelling. Hameroff proposes that this tunnelling allows a quantum object, such as the Bose-Einstein condensates mentioned above, to cross into other neurons, and thus extend across a large area of the brain as a single quantum object.
He further postulates that the action of this large-scale quantum feature is the source of the gamma (40 Hz) synchronisation observed in the brain, which is sometimes viewed as a correlate of consciousness. In support of the much more limited theory that gap junctions are related to the gamma oscillation, Hameroff quotes a number of studies from recent years.
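Gamma-band synchronisation of the kind invoked here is, at its core, a coupled-oscillator phenomenon, and the generic mathematics can be sketched without any quantum machinery. The toy simulation below is a standard Kuramoto model, not anything from Hameroff's papers; the oscillator count, frequencies and coupling strength are arbitrary choices of mine. It shows oscillators with slightly different natural frequencies around 40 Hz phase-locking once they are coupled strongly enough:

```python
import math

def kuramoto_step(phases, freqs, coupling, dt):
    """One Euler step of the Kuramoto model: each oscillator is pulled
    toward the phases of all the others, in proportion to `coupling`."""
    n = len(phases)
    return [
        p + dt * (w + coupling / n * sum(math.sin(q - p) for q in phases))
        for p, w in zip(phases, freqs)
    ]

def order_parameter(phases):
    """Kuramoto order parameter r: 0 = incoherent, 1 = fully synchronised."""
    n = len(phases)
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

# Five oscillators with natural frequencies spread around 40 Hz (gamma band).
phases = [0.0, 1.0, 2.5, 4.0, 5.5]                       # arbitrary initial phases
freqs = [2 * math.pi * f for f in (38, 39, 40, 41, 42)]  # rad/s

if __name__ == "__main__":
    r0 = order_parameter(phases)
    state = phases
    for _ in range(5000):  # 0.5 s of simulated time at dt = 0.1 ms
        state = kuramoto_step(state, freqs, coupling=100.0, dt=1e-4)
    print(f"r before: {r0:.2f}, r after: {order_parameter(state):.2f}")
```

With the strong coupling used here, the order parameter should climb from roughly 0.16 to above 0.9 within half a simulated second. This says nothing about microtubules or gap junctions as such; it only illustrates what "synchronisation as a collective order" means mechanically.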
From the point of view of consciousness theory, an essential feature of Penrose’s objective reduction is that the choice of states when objective reduction occurs is selected neither randomly, as are choices following measurement or decoherence, nor completely algorithmically. Rather, states are proposed to be selected by a ‘non-computable’ influence embedded in the fundamental level of spacetime geometry at the Planck scale.
Penrose claimed that such information is Platonic, representing pure mathematical truth, aesthetic and ethical values. More than two thousand years ago, the Greek philosopher Plato had proposed such pure values and forms, but in an abstract realm. Penrose placed the Platonic realm at the Planck scale. This relates to Penrose’s ideas concerning the three worlds: physical, mental, and the Platonic mathematical world. In his theory, the physical world can be seen as the external reality, the mental world as information processing in the brain and the Platonic world as the encryption, measurement, or geometry of fundamental spacetime that is claimed to support non-computational understanding.
To me it seems that Orch OR theory is more suitable for forming platonic representations of objects – that is invariant/ideal perception of an object. This I would relate to the Perceptual aspect of A-consciousness.
The electromagnetic field theory of consciousness is a theory that says the electromagnetic field generated by the brain (measurable by ECoG) is the actual carrier of conscious experience.
The starting point for these theories is the fact that every time a neuron fires to generate an action potential and a postsynaptic potential in the next neuron down the line, it also generates a disturbance to the surrounding electromagnetic (EM) field. Information coded in neuron firing patterns is therefore reflected into the brain’s EM field. Locating consciousness in the brain’s EM field, rather than the neurons, has the advantage of neatly accounting for how information located in millions of neurons scattered throughout the brain can be unified into a single conscious experience (sometimes called the binding problem): the information is unified in the EM field. In this way EM field consciousness can be considered to be ‘joined-up information’.
However their generation by synchronous firing is not the only important characteristic of conscious electromagnetic fields — in Pockett’s original theory, spatial pattern is the defining feature of a conscious (as opposed to a non-conscious) field.
In McFadden’s cemi field theory, the brain’s global EM field modifies the electric charges across neural membranes and thereby influences the probability that particular neurons will fire, providing a feed-back loop that drives free will.
To me, the EM field theories seem to be right on track regarding the fact that the EM field itself may modify/affect the firing probabilities of individual neurons, and thus lead to free will or a sense of agency by, in some sense, causing some neurons to fire over others. I believe we can model the agency aspect of A-consciousness, and find its neural substrates in the brain, using this approach.
The holonomic brain theory, originated by psychologist Karl Pribram and initially developed in collaboration with physicist David Bohm, is a model for human cognition that is drastically different from conventionally accepted ideas: Pribram and Bohm posit a model of cognitive function as being guided by a matrix of neurological wave interference patterns situated temporally between holographic Gestalt perception and discrete, affective, quantum vectors derived from reward anticipation potentials.
Pribram was originally struck by the similarity of the hologram idea and Bohm’s idea of the implicate order in physics, and contacted him for collaboration. In particular, the fact that information about an image point is distributed throughout the hologram, such that each piece of the hologram contains some information about the entire image, seemed suggestive to Pribram about how the brain could encode memories.
According to Pribram, the tuning of wave frequency in cells of the primary visual cortex plays a role in visual imaging, while such tuning in the auditory system has been well established for decades. Pribram and colleagues also assert that similar tuning occurs in the somatosensory cortex.
Pribram distinguishes between propagative nerve impulses on the one hand, and slow potentials (hyperpolarizations, steep polarizations) that are essentially static. At this temporal interface, he indicates, the wave interferences form holographic patterns.
To me, the holonomic approach seems to address the phenomenon lying between gestalt perception and quantum vectors derived from reward-anticipation potentials, or in simple English, between the perception and agency components of A-consciousness. This is the Memory aspect of A-consciousness. The use of the hologram as a model for storing information, the use of slow waves tuned to carry information, and the use of this model to explain memory formation (including hyperpolarization etc.) all point to the fact that this approach will be most successful in explaining the autobiographical memory that is associated with A-consciousness.
The quantum mind hypothesis proposes that classical mechanics cannot fully explain consciousness and suggests that quantum mechanical phenomena such as quantum entanglement and superposition may play an important part in the brain’s function and could form the basis of an explanation of consciousness.
Recent papers by physicist, Gustav Bernroider, have indicated that he thinks that Bohm’s implicate-explicate structure can account for the relationship between neural processes and consciousness. In a paper published in 2005 Bernroider elaborated his proposals for the physical basis of this process. The main thrust of his paper was the argument that quantum coherence may be sustained in ion channels for long enough to be relevant for neural processes and the channels could be entangled with surrounding lipids and proteins and with other channels in the same membrane. Ion channels regulate the electrical potential across the axon membrane and thus play a central role in the brain’s information processing.
Bernroider uses this recently revealed structure to speculate about the possibility of quantum coherence in the ion channels. Bernroider and co-author Sisir Roy’s calculations suggested to them that the behaviour of the ions in the K channel could only be understood at the quantum level. Taking this as their starting point, they then ask whether the structure of the ion channel can be related to logic states. Further calculations lead them to suggest that the K+ ions and the oxygen atoms of the binding pockets are two quantum-entangled sub-systems, which they then equate to a quantum computational mapping. The ions that are destined to be expelled from the channel are proposed to encode information about the state of the oxygen atoms. It is further proposed the separate ion channels could be quantum entangled with one another.
To me, the quantum entanglement (a bond between different phenomena) and the encoding of information about the state of the system in that entanglement seem all too similar to feelings as information about the emotional/bodily state. Thus, I propose that these quantum entanglements in ion channels may be the substrate that gives rise to access to the state of the system, and thereby to the feeling component of A-consciousness, i.e. access to one's own emotional states.
Space-time theories of consciousness have been advanced by Arthur Eddington, John Smythies and other scientists. The concept was also mentioned by Hermann Weyl who wrote that reality is a “…four-dimensional continuum which is neither ‘time’ nor ‘space’. Only the consciousness that passes on in one portion of this world experiences the detached piece which comes to meet it and passes behind it, as history, that is, as a process that is going forward in time and takes place in space”.
In 1953, CD Broad, in common with most authors in this field, proposed that there are two types of time, imaginary time measured in imaginary units (i) and real time measured on the real plane.
It can be seen that for any separation in 3D space there is a time at which the separation in 4D spacetime is zero. Similarly, if another coordinate axis is introduced called ‘real time’ that changes with imaginary time then historical events can also be no distance from a point. The combination of these result in the possibility of brain activity being at a point as well as being distributed in 3D space and time. This might allow the conscious individual to observe things, including whole movements, as if viewing them from a point.
Alex Green has developed an empirical theory of phenomenal consciousness that proposes that conscious experience can be described as a five-dimensional manifold. As in Broad’s hypothesis, space-time can contain vectors of zero length between two points in space and time because of an imaginary time coordinate. A 3D volume of brain activity over a short period of time would have the time extended geometric form of a conscious observation in 5D. Green considers imaginary time to be incompatible with the modern physical description of the world, and proposes that the imaginary time coordinate is a property of the observer and unobserved things (things governed by quantum mechanics), whereas the real time of general relativity is a property of observed things.
These space-time theories of consciousness are highly speculative but have features that their proponents consider attractive: every individual would be unique because they are a space-time path rather than an instantaneous object (i.e., the theories are non-fungible), and also because consciousness is a material thing so direct supervenience would apply. The possibility that conscious experience occupies a short period of time (the specious present) would mean that it can include movements and short words; these would not seem to be possible in a presentist interpretation of experience.
Theories of this type are also suggested by cosmology. The Wheeler-De Witt equation describes the quantum wave function of the universe (or more correctly, the multiverse).
To me, the space-time theories of consciousness, which lead to observation/consciousness from a point in the 4D/5D space-time continuum, seem to mirror the identity-formation function of stage 5. This I relate to the evaluation/deliberation aspect of A-consciousness.
In theoretical physics, digital physics holds the basic premise that the entire history of our universe is computable in some sense. The hypothesis was pioneered in Konrad Zuse’s book Rechnender Raum (translated by MIT into English as Calculating Space, 1970), which focuses on cellular automata. Juergen Schmidhuber suggested that the universe could be a Turing machine, because there is a very short program that outputs all possible programmes in an asymptotically optimal way. Other proponents include Edward Fredkin, Stephen Wolfram, and Nobel laureate Gerard ‘t Hooft. They hold that the apparently probabilistic nature of quantum physics is not incompatible with the notion of computability. A quantum version of digital physics has recently been proposed by Seth Lloyd. None of these suggestions has been developed into a workable physical theory.
It can be argued that the use of continua in physics constitutes a possible argument against the simulation of a physical universe. Removing the real numbers and uncountable infinities from physics would counter some of the objections noted above, and at least make computer simulation a possibility. However, digital physics must overcome these objections. For instance, cellular automata would appear to be a poor model for the non-locality of quantum mechanics.
To me, the simulation argument is one model of us and the world: we are living in a dream state/simulation/digital world where everything is synthetic, predictable, and computable. The alternative view is of the world as a real, analog, continuous world where everything is creative, unpredictable, and non-computable. One can, and should, hold both models in mind: a simulated reality that is the world, and a simulator that is oneself. Jagat mithya, Brahma sach: the world (simulation) is false, Brahma (creation) is true. The ability to see the world as both a fiction and a reality at the same time, as a stage laid out in advance and as creative jazz at the same time, leads to the sixth stage of consciousness: the A-consciousness of an emergent conscious self that is distinct from the mere body/brain. One can see oneself and others as actors playing their roles on the world's stage, or as agents co-creating reality.
That should be enough for today, but I am sure my astute readers will take this a notch further and propose two more theoretical approaches to consciousness, and perhaps look for their neural substrates, based on the remaining two stages and components of A-consciousness.
I'd like to start with a quote from the Mundaka Upanishad:
Two birds, inseparable friends, cling to the same tree. One of them eats the sweet fruit, the other looks on without eating.
On the same tree man sits grieving, immersed, bewildered, by his own impotence. But when he sees the other lord contented and knows his glory, then his grief passes away.
Today I plan to delineate the major conscious processes in the brain, without bothering with their neural correlates or how they are related to the unconscious processes that I have delineated earlier. Also, I'll be restricting the discussion mostly to the easy problem of Access or A-consciousness, leaving the hard problem of phenomenal or P-consciousness for later.
I'd first like to quote a definition of consciousness from Baars:
The contents of consciousness include the immediate perceptual world; inner speech and visual imagery; the fleeting present and its fading traces in immediate memory; bodily feelings like pleasure, pain, and excitement; surges of feeling; autobiographical events when they are remembered; clear and immediate intentions, expectations and actions; explicit beliefs about oneself and the world; and concepts that are abstract but focal. In spite of decades of behaviouristic avoidance, few would quarrel with this list today.
Next I would like to list the subsystems identified by Charles T. Tart that are involved in consciousness:
- EXTEROCEPTION (sensing the external world)
- INTEROCEPTION (sensing the body)
- INPUT-PROCESSING (seeing meaningful stimuli)
- SPACE/TIME SENSE
- SENSE OF IDENTITY
- EVALUATION AND DECISION-MAKING
- MOTOR OUTPUT
With this background, let me delineate the major conscious processes/systems that, as I see it, make up A-consciousness:
- Perceptual system: Once the spotlight of attention is available, it can be used to bring into focus the unconscious input representations that the brain is creating. Thus a system may evolve that has access to information regarding the sensations being processed, or in other words that perceives and is conscious of what is being sensed. To perceive is to have access to one's sensations. In Tart's model, it is the input-processing module that 'sees' meaningful stimuli and ignores the rest / hides them from second-order representation. This is Baars' immediate perceptual world.
- Agency system: The spotlight of attention can also bring into the foreground the unconscious urges that propel movement. This access to information regarding how and why we move gives rise to the emergence of A-consciousness of will/volition/agency. To will is to have access to one's action-causes. In Tart's model, it is the motor output module that enables the sense of voluntary movement. In Baars' definition it is clear and immediate intentions, expectations and actions.
- Memory system: The spotlight of attention may also bring into focus past learning. This access to information regarding past unconscious learning gives rise to the A-consciousness of remembering/recognizing. To remember is to have access to past learning. The Tart subsystem for the same is Memory, and in Baars' definition it is autobiographical events when they are remembered.
- Feeling (emotional/mood) system: The spotlight of attention may also highlight the emotional state of the organism. Information about one's own emotional state gives rise to the A-consciousness of feelings that have an emotional tone/mood associated with them. To feel is to have access to one's emotional state. Tart's emotions subsystem, and Baars' bodily feelings like pleasure, pain, and excitement, and surges of feeling, relate to this.
- Deliberation/reasoning/thought system: The spotlight of attention may also highlight the decisional and evaluative unconscious processes the organism engages in. Information about which values guided a decision can lead to a reasoning module that justifies the decisions, and to an A-consciousness of introspection. To think is to have access to one's own deliberative and evaluative processes. Tart's evaluation and decision-making module serves the same purpose. Baars' definition may be enhanced to include introspection, i.e. access to thoughts and thinking (remember Descartes' dictum: I think, therefore I am), as part of consciousness.
- Modeling system that can differentiate and perceive dualism: The spotlight of attention may highlight the dual properties of the world (deterministic and chaotic). Information regarding the fact that two contradictory models of the world can both be true at the same time leads to a modeling of oneself as different from the world, giving rise to the difference between 'this' and 'that', and to the sense of self. One models both the self and the world based on the principles/subsystems of exteroception and interoception, and this gives rise to the A-consciousness of beliefs about the self and the world. To believe is to have access to one's model of something. One has access to a self/subjectivity different from the world and defined by interoceptive senses, and a world/reality different from the self and defined by exteroceptive senses. The interoceptive and exteroceptive subsystems of Tart, and Baars' explicit beliefs about oneself and the world, are relevant here. This system gives rise to the concept of a subjective person or self.
- Language system that can report on subjective contents and propositions: The spotlight of awareness may verbalize the unconscious communicative intents and propositions, giving rise to access to inner speech and enabling overt language and reporting capabilities. To verbally report is to have access to the underlying narrative that one wants to communicate and that one is creating/confabulating. This narrative and story-telling capability should also, in my view, lead to the A-consciousness of the stream of consciousness. This would most probably be implemented by Tart's unconscious and space/time sense modules, and relates to Baars' fleeting present and its fading traces in immediate memory: a sense of an ongoing stream of consciousness. To have a stream of consciousness is to have access to one's inner narrative.
- Awareness system that can bring into focal awareness the different conscious processes that are seen as coherent: The spotlight of attention can also be turned upon itself. Information about which processes make up a coherent whole, and are thus being attended to and amplified, gives rise to a sense of self-identity that is stable across time and unified in space. To be aware is to have access to what one is attending to, focusing on, or is 'conscious' of. Tart's Sense of identity subsystem, and Baars' concepts that are abstract but focal, relate to this. Once available, the spotlight of awareness opens the floodgates of phenomenal or P-consciousness: the experience, in the here-and-now, of qualia that are invariant and experiential in nature. That 'feeling of what it means to be' is, of course, the subject matter for another day and another post!
This article continues my series on major conscious and unconscious processes in the brain. In my last two posts I have talked about 8 major unconscious processes in the brain, viz. the sensory, motor, learning, affective, cognitive (deliberative), modelling, communication and attentive systems. Today, I will not talk about the brain in particular, but will approach the problem from a slightly different problem domain: that of modelling/implementing an artificial brain/mind.
I am a computer scientist, so I am broadly aware of the varied approaches used to model/implement the brain. Many of these use computers, though not every approach assumes that the brain is a computer.
Before continuing, I would briefly like to digress and link to one of my earlier posts regarding the different traditions of psychological research in personality and how I think they fit an evolutionary stage model. That may serve as background to the type of sweeping analysis and generalization I am going to do. To be fair, it is also important to recall the Indian parable in which a few blind men, asked to describe an elephant, each described whatever part he could lay his hands on, and thus each provided a partial and incorrect picture of the elephant. The one who grabbed the tail, for instance, described it as snake-like, and so forth.
With that in mind, let us look at the major approaches to modelling/implementing the brain/intelligence/mind. Remember also that I have so far been most interested in unconscious brain processes, and I sincerely believe that all the unconscious processes can, and will, be successfully implemented in machines. I do not believe machines will become sentient (at least not any time soon), but that question is for another day.
So, with due thanks to @wildcat2030, I came across this book today and could immediately see how the major approaches to artificial robot brains are heavily influenced by (and follow) the first five evolutionary stages and the first five unconscious processes in the brain.
The book in question is 'Robot Brains: Circuits and Systems for Conscious Machines' by Pentti O. Haikonen, and although he is most interested in conscious machines, I will restrict myself to intelligent but unconscious machines/robots.
The first chapter of the book (which has made it to my reading list) is available in its entirety at the Wiley site, and I quote extensively from there:
Presently there are five main approaches to the modelling of cognition that could be used for the development of cognitive machines: the computational approach (artificial intelligence, AI), the artificial neural networks approach, the dynamical systems approach, the quantum approach and the cognitive approach. Neurobiological approaches exist, but these may be better suited for the eventual explanation of the workings of the biological brain.
The computational approach (also known as artificial intelligence, AI) towards thinking machines was initially worded by Turing (1950). A machine would be thinking if the results of the computation were indistinguishable from the results of human thinking. Later on Newell and Simon (1976) presented their Physical Symbol System Hypothesis, which maintained that general intelligent action can be achieved by a physical symbol system and that this system has all the necessary and sufficient means for this purpose. A physical symbol system was here the computer that operates with symbols (binary words) and attached rules that stipulate which symbols are to follow others. Newell and Simon believed that the computer would be able to reproduce human-like general intelligence, a feat that still remains to be seen. However, they realized that this hypothesis was only an empirical generalization and not a theorem that could be formally proven. Very little in the way of empirical proof for this hypothesis exists even today and in the 1970s the situation was not better. Therefore Newell and Simon pretended to see other kinds of proof that were in those days readily available. They proposed that the principal body of evidence for the symbol system hypothesis was negative evidence, namely the absence of specific competing hypotheses; how else could intelligent activity be accomplished by man or machine? However, the absence of evidence is by no means any evidence of absence. This kind of ‘proof by ignorance’ is too often available in large quantities, yet it is not a logically valid argument. Nevertheless, this issue has not yet been formally settled in one way or another. Today’s positive evidence is that it is possible to create world-class chess-playing programs and these can be called ‘artificial intelligence’. The negative evidence is that it appears to be next to impossible to create real general intelligence via preprogrammed commands and computations.
The original computational approach can be criticized for the lack of a cognitive foundation. Some recent approaches have tried to remedy this and consider systems that integrate the processes of perception, reaction, deliberation and reasoning (Franklin, 1995, 2003; Sloman, 2000). There is another argument against the computational view of the brain. It is known that the human brain is slow, yet it is possible to learn to play tennis and other activities that require instant responses. Computations take time. Tennis playing and the like would call for the fastest computers in existence. How could the slow brain manage this if it were to execute computations?
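Stepping outside the quote for a moment: the physical symbol system idea- symbols plus rules that stipulate which symbols are to follow others- is easy to sketch in a few lines of code. The rule table below is purely my own illustration (the symbols and rules are made up, not from the book):

```python
# A toy physical symbol system: the rule table stipulates which symbol
# follows a given (state, percept) pair. All names here are illustrative.
RULES = {
    ("hungry", "food_visible"): "approach_food",
    ("approach_food", "food_reached"): "eat",
    ("eat", "food_gone"): "rest",
}

def next_symbol(state, percept):
    """Look up the successor symbol; stay in the same state if no rule fires."""
    return RULES.get((state, percept), state)

print(next_symbol("hungry", "food_visible"))  # approach_food
print(next_symbol("hungry", "nothing_seen"))  # hungry
```

The appeal and the limitation are both visible here: the system is crisp and inspectable, but every contingency has to be preprogrammed in advance- which is exactly why general intelligence via such rule tables 'remains to be seen'.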
The artificial neural networks approach, also known as connectionism, had its beginnings in the early 1940s when McCulloch and Pitts (1943) proposed that the brain cells, neurons, could be modelled by a simple electronic circuit. This circuit would receive a number of signals, multiply their intensities by the so-called synaptic weight values and sum these modified values together. The circuit would give an output signal if the sum value exceeded a given threshold. It was realized that these artificial neurons could learn and execute basic logic operations if their synaptic weight values were adjusted properly. If these artificial neurons were realized as hardware circuits then no programs would be necessary and biologically plausible artificial replicas of the brain might be possible. Also, neural networks operate in parallel, doing many things simultaneously. Thus the overall operational speed could be fast even if the individual neurons were slow. However, problems with artificial neural learning led to complicated statistical learning algorithms, ones that could best be implemented as computer programs. Many of today’s artificial neural networks are statistical pattern recognition and classification circuits. Therefore they are rather removed from their original biologically inspired idea. Cognition is not mere classification and the human brain is hardly a computer that executes complicated synaptic weight-adjusting algorithms.
The human brain has some 10 to the power of 11 neurons and each neuron may have tens of thousands of synaptic inputs and input weights. Many artificial neural networks learn by tweaking the synaptic weight values against each other when thousands of training examples are presented. Where in the brain would reside the computing process that would execute synaptic weight adjusting algorithms? Where would these algorithms have come from? The evolutionary feasibility of these kinds of algorithms can be seriously doubted. Complicated algorithms do not evolve via trial and error either. Moreover, humans are able to learn with a few examples only, instead of having training sessions with thousands or hundreds of thousands of examples. It is obvious that the mainstream neural networks approach is not a very plausible candidate for machine cognition although the human brain is a neural network.
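The McCulloch-Pitts unit described in the quote is simple enough to write down directly: a weighted sum of the inputs followed by a threshold. With suitably chosen weights, the same unit computes different basic logic operations (the weights and thresholds below are my own hand-picked values, not learned ones):

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (1) if the weighted input sum reaches threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Hand-set weights turn the same circuit into different logic gates:
def AND(a, b):
    return mp_neuron([a, b], [1, 1], threshold=2)

def OR(a, b):
    return mp_neuron([a, b], [1, 1], threshold=1)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(1, 0), OR(0, 0))    # 1 0
```

Note that nothing in this sketch involves a weight-adjusting algorithm- the weights are simply given, which is the gap that the complicated statistical learning procedures criticized above were invented to fill.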
Dynamical systems were proposed as a model for cognition by Ashby (1952) already in the 1950s and have been developed further by contemporary researchers (for example Thelen and Smith, 1994; Gelder, 1998, 1999; Port, 2000; Wallace, 2005). According to this approach the brain is considered as a complex system with dynamical interactions with its environment. Gelder and Port (1995) define a dynamical system as a set of quantitative variables, which change simultaneously and interdependently over quantitative time in accordance with some set of equations. Obviously the brain is indeed a large system of neuron activity variables that change over time. Accordingly the brain can be modelled as a dynamical system if the neuron activity can be quantified and if a suitable set of, say, differential equations can be formulated. The dynamical hypothesis sees the brain as comparable to analog feedback control systems with continuous parameter values. No inner representations are assumed or even accepted. However, the dynamical systems approach seems to have problems in explaining phenomena like ‘inner speech’. A would-be designer of an artificial brain would find it difficult to see what kind of system dynamics would be necessary for a specific linguistically expressed thought. The dynamical systems approach has been criticized, for instance by Eliasmith (1996, 1997), who argues that the low dimensional systems of differential equations, which must rely on collective parameters, do not model cognition easily and the dynamicists have a difficult time keeping arbitrariness from permeating their models. Eliasmith laments that there seems to be no clear ways of justifying parameter settings, choosing equations, interpreting data or creating system boundaries. Furthermore, the collective parameter models make the interpretation of the dynamic system’s behaviour difficult, as it is not easy to see or determine the meaning of any particular parameter in the model. 
Obviously these issues would translate into engineering problems for a designer of dynamical systems.
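A small example makes the engineering worry concrete. The logistic map below is a textbook toy dynamical system (my illustration, not the book's): it is fully deterministic, yet two trajectories started from almost identical points soon diverge, which is one reason tying model behaviour to particular parameter settings is genuinely hard.

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a classic deterministic dynamical system."""
    return r * x * (1.0 - x)

# Two trajectories differing initially by one part in ten million:
a, b = 0.2, 0.2000001
max_gap = 0.0
for step in range(30):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))
# Despite the deterministic rule, the gap grows by orders of magnitude.
print(max_gap)
```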
The quantum approach maintains that the brain is ultimately governed by quantum processes, which execute nonalgorithmic computations or act as a mediator between the brain and an assumed more-or-less immaterial ‘self’ or even ‘conscious energy field’ (for example Herbert, 1993; Hameroff, 1994; Penrose, 1989; Eccles, 1994). The quantum approach is supposed to solve problems like the apparently nonalgorithmic nature of thought, free will, the coherence of conscious experience, telepathy, telekinesis, the immortality of the soul and others. From an engineering point of view even the most practical propositions of the quantum approach are presently highly impractical in terms of actual implementation. Then there are some proposals that are hardly distinguishable from wishful fabrications of fairy tales. Here the quantum approach is not pursued.
The cognitive approach maintains that conscious machines can be built because one example already exists, namely the human brain. Therefore a cognitive machine should emulate the cognitive processes of the brain and mind, instead of merely trying to reproduce the results of the thinking processes. Accordingly the results of neurosciences and cognitive psychology should be evaluated and implemented in the design if deemed essential. However, this approach does not necessarily involve the simulation or emulation of the biological neuron as such, instead, what is to be produced is the abstracted information processing function of the neuron.
A cognitive machine would be an embodied physical entity that would interact with the environment. Cognitive robots would be obvious applications of machine cognition and there have been some early attempts towards that direction. Holland seeks to provide robots with some kind of consciousness via internal models (Holland and Goodman, 2003; Holland, 2004). Kawamura has been developing a cognitive robot with a sense of self (Kawamura, 2005; Kawamura et al., 2005). There are also others. Grand presents an experimentalist’s approach towards cognitive robots in his book (Grand, 2003).
A cognitive machine would be a complete system with processes like perception, attention, inner speech, imagination, emotions as well as pain and pleasure. Various technical approaches can be envisioned, namely indirect ones with programs, hybrid systems that combine programs and neural networks, and direct ones that are based on dedicated neural cognitive architectures. The operation of these dedicated neural cognitive architectures would combine neural, symbolic and dynamic elements.
However, the neural elements here would not be those of the traditional neural networks; no statistical learning with thousands of examples would be implied, no backpropagation or other weight-adjusting algorithms are used. Instead the networks would be associative in a way that allows the symbolic use of the neural signal arrays (vectors). The ‘symbolic’ here does not refer to the meaning-free symbol manipulation system of AI; instead it refers to the human way of using symbols with meanings. It is assumed that these cognitive machines would eventually be conscious, or at least they would reproduce most of the folk psychology hallmarks of consciousness (Haikonen, 2003a, 2005a). The engineering aspects of the direct cognitive approach are pursued in this book.
Now, to me each of these approaches is unidimensional-
- The computational approach is suited for symbol manipulation and information representation, and might give good results when used in systems that have mostly 'sensory' features, like forming a mental representation of the external world, a chess game, etc. Here something (a stimulus from the world) is represented as something else (an internal symbolic representation).
- The dynamical systems approach is guided by interactions with the environment and the principles of feedback control systems, and is also prone to 'arbitrariness' or 'randomness'. It is perfectly suited to implementing the 'motor system' of the brain, as one of their common features is apparent unpredictability (volition) despite being deterministic (chaos theory).
- The neural networks approach, or connectionism, is well suited to implementing the 'learning system' of the brain, and we can very well see that the best neural-network-based systems are those that can categorize and classify things, just as 'the learning system' of the brain does.
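As a concrete taste of that 'learning system' flavour (my own toy example, not from the book): the classic perceptron rule nudges the weights of a threshold unit toward each misclassified example until a linearly separable categorization, such as the OR function, is learned.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Perceptron rule: shift weights toward each misclassified example."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            pred = 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0
            err = target - pred  # zero when the example is already classified
            w = [w[0] + lr * err * x0, w[1] + lr * err * x1]
            b += lr * err
    return w, b

# Four labelled examples of the OR function are enough to train it:
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x0, x1: 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0
print([predict(x0, x1) for (x0, x1), _ in data])  # [0, 1, 1, 1]
```

Even this tiny network needs repeated passes over labelled examples- which is exactly the learning-as-classification character, rather than full cognition, that the book criticizes.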
- The quantum approach to the brain I haven't studied enough to comment on, but the action-tendencies of the 'affective system' seem all too similar to the superimposed, simultaneous states that exist in a wave function before it is collapsed. Being in an affective state just means having a set of many possible related and relevant actions simultaneously activated, with one of them then somehow decided upon and actualized. I suspect that if we ever model emotion in machines, it will have to use quantum principles of wave functions, entanglement, etc.
- The cognitive approach, again, I haven't got the hang of yet, but it seems the proposal is to build into the machine a design based on actual brain and mind implementations. Embodiment seems important, and so does emulating the information-processing functions of neurons. I would stick my neck out and predict that, whatever this cognitive approach is, it should be best able to model the reasoning, evaluative and decision-making functions of the brain. I am reminded of the computational modelling methods used in cognitive science to functionally decompose a cognitive process (whether symbolic or subsymbolic modelling), which again aid in decision making/reasoning (see the Wikipedia entry).
Overall, I would say there is room for further improvement in the way we build more intelligent machines. They could be built with two models of the world- one deterministic, another chaotic- and use the two models simultaneously (the sixth stage of modelling); they could then communicate with other machines and thus learn language (some simulation methods for language abilities do involve agents communicating with each other using arbitrary tokens, with a language developing later) (the seventh stage); and they could then be implemented with a spotlight of attention (the eighth stage), whereby some coherent systems are amplified and others suppressed. Of course, all this is easier said than done; we will need at least three more major approaches to modelling and implementing brain/intelligence before we can model every major unconscious process in the brain. To model consciousness and program sentience is an uphill task from there, and would definitely require a leap in our understanding/capabilities.
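For the seventh stage, the agent-based language simulations I have in mind work roughly like the 'naming game' sketched below (a minimal toy of my own; the token values and parameters are arbitrary): agents coin arbitrary tokens for a single object, and repeated pairwise exchanges gradually drive the population toward a shared vocabulary.

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

def naming_game(n_agents=10, rounds=2000):
    """Agents converge toward a shared arbitrary token for one object."""
    vocab = [set() for _ in range(n_agents)]
    for _ in range(rounds):
        speaker, hearer = random.sample(range(n_agents), 2)
        if not vocab[speaker]:
            vocab[speaker].add(random.randrange(10_000))  # coin a new token
        word = random.choice(sorted(vocab[speaker]))
        if word in vocab[hearer]:
            # Communication succeeded: both agents keep only this token.
            vocab[speaker], vocab[hearer] = {word}, {word}
        else:
            vocab[hearer].add(word)  # failure: the hearer learns the token
    return vocab

vocab = naming_game()
```

In typical runs the whole population ends up sharing a single token- a miniature convention, though obviously nothing like real language.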
Do tell me if you find the above reasonable, and whether you believe that these major approaches to artificial brain implementation are guided and constrained by the major unconscious processes in the brain- and that we can learn much about the brain from the study of these artificial approaches, and vice versa.