There’s a widespread myth that some people are left-brained, and some people are right-brained.
Supposedly, left-brained people are quantitative and analytical. They pay attention to details. They use logical thinking to solve problems in depth. Everything they do is carefully deliberated.
Right-brained people are qualitative and rely on intuition. They can see the big picture of things, and are able to go with their ‘gut’ when making decisions. Everything they do comes automatically.
A few seconds of thought is enough to realise that this framework is an oversimplification. Everybody uses both kinds of thinking. Nothing about this particular dichotomy seems important enough for it to divide either humankind or human brains into the two neat halves it suggests.
However, the folk wisdom does contain a deeper insight: thinking comes in two distinct flavours.
Two Kinds of Thinking
Humans avoiding familiar dangers like prowling lions or speeding trucks don’t use a lot of conscious reasoning. More often, high-stakes situations provoke an emotional response, providing our conscious minds with an executive summary of the situation and prompting immediate action.
Analytical thinking might seem pretty useless in the wilderness, but anybody learning to drive a car, ride a bicycle, or throw the latest kind of spear will need to devote conscious, deliberate effort to the task before it can be outsourced to the emotional mind. Even then, difficult edge cases will need to be analysed separately as they arise.
Learning things is important, but also incredibly tiring. Conscious, analytical thinking is slow and expensive. In fact, most non-human animals don’t seem to do it. Dog training doesn’t require conscious thinking on the part of the dog, but it does require a lot more time than training a human. Crows do seem to be adept at logic puzzles involving tool use, but the power of deliberate human cognition remains unparalleled. We’re a strange breed of rational-animal cyborgs.
Daniel Kahneman’s Thinking, Fast and Slow condenses decades of psychological research into the thesis that automatic and deliberate thinking are distinct neural processes that can often draw different conclusions from identical data. The fast, automatic brain is system one (S1), and the slow, analytical brain is system two (S2).
The book describes various heuristics used by the brain to simplify decision-making in complex situations, and shows where these approximations can fail. But it fails to answer another question about the two systems: why is only one of them a conscious process?
Desires and Identity
Sometimes our decisions don’t quite feel like our own. Addiction, akrasia and other unwanted desires can be hard to reconcile with our conscious identity, and certainly aren’t the result of an objective cost-benefit calculation. S1 urges us to actions that the conscious S2 disapproves of but feels powerless to prevent. It is rare acts of willpower or strategic planning that allow S2 to defeat the various errors of S1 and make us feel in control of our actions.
Four decades before Kahneman’s book, philosopher Harry Frankfurt argued that free will depends on our ability to resolve these conflicting tendencies of the brain. Freedom of the Will and the Concept of a Person suggests that personhood should be characterised not by consciousness, but by one’s freedom to alter and control the urges one knows to be bad.
Frankfurt introduces the concept of first-order desires: desires to perform a particular action, like drinking coffee or making your bed. These desires come in various intensities, and only some are effective in the sense of motivating action. One’s will is simply the collection of effective desires.
Free will requires control over one’s effective motivations. Such control itself involves a volitional process: motivations are changed according to some idea about what they should be changed to. These second-order desires tend to come from S2, which synthesises the various S1 intuitions about behaviour with analytical introspection into a decision about what first-order desires are desirable. Effective second-order desires are what characterise willpower, in Frankfurt’s model.
Brain Algorithms
S1 can be thought of as classifying situations into various emotional categories, and executing reactive algorithms in accordance with this classification.
When you wake up, S1 classifies the situation as ‘tired’ and marches you downstairs to turn on a kettle and prepare a cup of coffee. S2 might run a more analytical algorithm that surfaces the relevant associations (‘coffee has caffeine’, ‘caffeine is a stimulant’, ‘stimulant use is unhealthy’), which ultimately translate into S1 language as ‘coffee is bad’. This might set off a variety of emotional judgements like ‘you should feel guilty about the coffee’, and might prompt S2 to relearn the routine by rewriting the emotional classifiers or the algorithmic response.
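As a loose caricature (not a claim about how brains actually implement any of this), the coffee routine can be sketched in code: S1 as a fast lookup from emotional categories to cached responses, and S2 as a slow process that occasionally rewrites those entries. Every name and rule below is invented for illustration.

```python
# Toy sketch of the S1/S2 division of labour. All situations, categories
# and responses here are illustrative stand-ins, not a cognitive model.

# S1: fast classification into emotional categories, with cached responses.
s1_classifier = {"just woke up": "tired"}
s1_responses = {"tired": "make coffee"}

def s1_react(situation):
    """Fast path: classify the situation and fire the cached response."""
    category = s1_classifier.get(situation, "neutral")
    return s1_responses.get(category, "do nothing")

def s2_relearn(category, new_response):
    """Slow path: deliberately rewrite S1's cached response for a category."""
    s1_responses[category] = new_response

print(s1_react("just woke up"))      # cached habit: "make coffee"
s2_relearn("tired", "drink water")   # S2 propagates a new first-order desire
print(s1_react("just woke up"))      # retrained habit: "drink water"
```

The point of the sketch is the asymmetry: `s1_react` is a cheap lookup that runs every time, while `s2_relearn` is the expensive, occasional edit that changes what the lookup will do in future.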
First-order desires are triggered by a primarily emotional model of our situation. S1 generates and resolves conflicting desires so we can choose a course of action. This process contains implicit judgements about the best response to a situation. Kahneman’s work on prospect theory shows how the utility function implied by S1 judgements differs from economically rational behaviour.
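One concrete, well-measured piece of that difference is prospect theory’s value function, which evaluates gains and losses relative to a reference point and weighs losses more heavily than equal gains. A short sketch follows; the parameter estimates (α ≈ 0.88, λ ≈ 2.25) come from Tversky and Kahneman’s later empirical work and vary across studies, and the coin-flip scenario is invented for illustration.

```python
# Prospect theory's value function: outcomes are valued relative to a
# reference point, with diminishing sensitivity and loss aversion.
# Parameters are commonly cited empirical estimates, not exact constants.

ALPHA = 0.88   # diminishing sensitivity to both gains and losses
LAMBDA = 2.25  # loss aversion: losses weigh roughly 2.25x as much as gains

def value(x):
    """Subjective value of a gain or loss x relative to the reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

# An expected-value maximiser is indifferent to a fair coin flip over
# +100 / -100; the S1-style valuation below rejects it decisively.
flip = 0.5 * value(100) + 0.5 * value(-100)
print(flip)  # negative: the possible loss outweighs the equal possible gain
```

This is exactly the wedge Kahneman describes between S1’s implied utilities and economic rationality: the flip has zero expected value, but negative subjective value.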
Our conscious identity is more closely aligned with S2’s introspective evaluation of S1’s utility algorithm. Deciding to propagate a second-order desire into S1 to produce the corresponding first-order desire involves an implicit utility function over the space of utility functions: a decision about what is valuable to value.
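The idea of a utility function over utility functions can also be made mechanical, at the cost of heavy simplification. In the toy sketch below, S2 scores candidate sets of first-order desires against a second-order criterion and adopts the winner; every name and number is invented purely for illustration.

```python
# Toy 'utility function over utility functions'. Entirely illustrative:
# the candidates, weights and actions are made up for this sketch.

# Candidate first-order utility functions: alternative things S1 could want.
candidates = {
    "comfort": {"coffee": 5, "exercise": 1},
    "health":  {"coffee": 1, "exercise": 5},
}

# S2's second-order utility: a judgement about how desirable it is
# to hold each set of first-order desires in the first place.
second_order_utility = {"comfort": 1, "health": 3}

# 'Deciding what to value': pick the best-rated utility function,
# then act on the first-order desire it ranks highest.
chosen = max(candidates, key=lambda name: second_order_utility[name])
first_order = candidates[chosen]
action = max(first_order, key=lambda a: first_order[a])
print(chosen, action)  # health exercise
```

The two `max` calls are the two levels of the model: the inner one is S1 choosing an action under its current values, the outer one is S2 choosing which values S1 should hold.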
Propagation Conflicts
Sometimes S1 might resist attempts to propagate these desires down from S2, and we need to use tools like psychoanalysis, psychotropics, or just introspection to push the new decision-making algorithm into S1. This needs to be done gradually, or the algorithm will end up being too complicated and abstract for S1 to actually execute.
There might be good reasons for resisting these changes which our analytical brains have just missed. Consider how many sophisticated plans fail completely because of some unforeseen factor, where intuition and improvisation would have worked better to evaluate and navigate the complex situation.
Changing our values should be costly for S2, since it’s not designed to see the ‘big picture’ view of things, and it might end up pulling down Chesterton’s fence. You might not see an obvious logical reason why you need to get sleep every night, but S1 will still tell you when you’re tired.
S1 and S2 should be in constant healthy conflict. Conscious You needs to negotiate with your animal brain, and actually listening to it is an important part of that process.
The propagation process is itself an S1-S2 algorithm. An authoritarian environment might kill willpower and replace it with obedience to emotion. On the other hand, you might be able to replace that algorithm with something more accepting of S2 influence and bootstrap yourself into somebody with potentially unlimited willpower.
Just remember that some parts of the acceptance criteria for new algorithms can’t be changed, and some parts shouldn’t.
Works Referenced
Harry G. Frankfurt, 1971, Freedom of the Will and the Concept of a Person, The Journal of Philosophy
Daniel Kahneman, 2011, Thinking, Fast and Slow
D. Kahneman & A. Tversky, 1979, Prospect Theory: An Analysis of Decision Under Risk, Econometrica
Shane Parrish, Chesterton’s Fence: A Lesson in Second Order Thinking, Farnam Street
Robert H. Shmerling, 2017, Right Brain/Left Brain, Right?, Harvard Health Publishing