slowthai & The Locus of Control

It’s a bright cold day in April, and you’d be forgiven for not knowing what time the clocks are striking. As social isolation digs further into one’s sense of meaning in the world, opportunities for escapism prove fairly useful, and I’ve found a good one.

As I only learned recently, in addition to being the most notable scientific journal in existence, Nature have a side hustle of publishing short fiction under a branch named Futures. And it’s class. While this particular future is one that they failed to predict, the column generally presents a prescient selection of science fiction that offers the behavioural sciences a preview of questions that they may have to ask eventually, and one featured story in particular sticks out to me right now.

‘What’s Expected of Us?’ by Ted Chiang offers a brief description of a reality where a device that (debatably) demonstrates a lack of free will is commercialized. Quickly, it spreads existential dread and meaninglessness amongst the population, a bit like a virus. It’s not really a complex point. When the sense, or locus, of control over one’s life evaporates, life becomes a lot more difficult to enjoy.

I can’t help but feel that ‘What’s expected of us?’ is a suitable question for behavioural policy units to be asking themselves at this point. After the term ‘herd immunity’ sparked a public relations disaster for the UK’s Behavioural Insights Team around a month ago, it appears as if a public that was previously unaware of behaviourally informed policy affecting their lives is now rather cynical about the attempts to do so.

Notwithstanding the fact that it seems the original behavioural Covid response wasn’t overly evidence-based, it certainly felt as if the backlash was amplified by the increased salience of there being a behavioural unit in existence in the first place.

Online outcry about the role of behavioural science in the UK government was typified by accusations that the discipline was no more than a variation of ‘digital marketing’ rather than science, as well as classic comparisons to Big Brother or Anthony Burgess’ A Clockwork Orange. Obviously, these examples are sensationalist, but the sentiment is worth paying attention to. Behavioural Science arrives with the philosophy of helping people behave in positive ways that they would otherwise fall short of, but when this begins to interfere with one’s sense of autonomy, is this ambition a little misguided?

At the very least, on occasions such as this where behavioural resources are used for topics that go beyond issues like simple financial decisions, there is definitely a hint of ‘who watches the watchmen?’.

It’s not surprising from a psychological perspective that people are averse to other agents having an input into how their lives turn out, even when it’s ‘for the better’. On the back of decades of work spearheaded by Edward Deci & Richard Ryan (2017), Self-Determination Theory identifies the central basic needs that underlie general mental wellbeing, one of which is autonomy. It’s not just that we want to feel in charge of our own behaviour; we kind of need to.

A Clockwork Orange envisions a system in which the powers that be can condition wrongdoers into making less bad decisions in future, and the free will of the individual is traded off for a more moral, happy society. If you’ve never read the book or seen the film, there’s a chance that you may have at least seen an homage to it not once, but twice, in the excellent music videos of the Mercury-nominated slowthai.

The Northampton rapper’s Nothing Great About Britain last year became an iconic representation of the angst that exists between a government with a worrying number of Oxbridge graduates and the sections of society that they don’t think about, so the visual metaphor used in Inglorious works perfectly. He doesn’t even leave it at visuals either, as heard in the track Dead Leaves:

Back here tomorrow

Clockwork Orange

I ain’t done a day of porridge

Don’t make me Chuck Norris

‘Cause I run my town but I’m nothin’ like Boris

Inglorious’ Orange Homage w/ added Boris

The album and the book share a message: if those in power try to force you one way, push back by being a sort of ‘true self’. What is truly weird is that real life is somehow less optimistic than Burgess’ novel. In the book, while the behavioural intervention is extreme, it’s not effective long term, and the story concludes with a place for free will remaining in life.

In a piece I recently came across that hasn’t become outdated with time, Newman (1991) also uses A Clockwork Orange as a lens through which to view behavioural measures in government:

“If free will does indeed exist, then destructive interventions attempted by behaviorists will be ineffective and they will therefore be discredited, this will do more to destroy the influence of behaviorists than any philosophical treatise. If the behavioral interventions are successful, however, then we had better take them seriously.”

This is the thing. They do work. They. Work. Pretty. Well.

Of course, when Burgess wrote his novel, he wasn’t thinking about these sorts of policies with measurable social impacts. But when he wrote (on a Kantian wavelength) that “when a man cannot choose, he ceases to be a man”, was he just being dramatic?

As a behavioural science student, I’m not sure what the answer to that is and that makes me somewhat uncomfortable. Behavioural interventions don’t restrict choice, but they do affect it when designed well. So, when you consider that alongside a climate where people are resistant to behavioural science units influencing them in the first place, the equilibrium position we are in seems a small bit fucked.

In Nothing Great About Britain, slowthai rebels against a government that doesn’t consider the interests of its people. When you note that SDT tells us that autonomy and a sense of control over one’s actions are key to wellbeing, perhaps behavioural science and its often-obfuscated motivations are in the firing line too.

However, if someone did argue against behavioural policies in an attempt to preserve the autonomy of citizens, therein lies an arguably trickier issue: free will is likely a cognitive illusion in the first place.

I began this post with reference to Chiang’s ‘What’s Expected of Us?’, and it’s a story not borne out of pure fantasy. Considering that models of behaviour propose that beliefs and actions are a result of genetic dispositions, personality traits and the environment you were thrown into, there is an enormous role for randomness, and no clear role for agency. Self-determination theory recognises an aversion to other agents having an input into how our lives turn out, but it could be even more difficult to deal with having no input ourselves.

Another, better, short story of Chiang’s is titled ‘Anxiety is the Dizziness of Freedom’, in reference to the existentialist Soren Kierkegaard (discussed in much better depth here). Kierkegaard proposed that the sense of discrete selfhood is illusory, so it is fitting that in this story Chiang examines a possible future where the many worlds interpretation of quantum mechanics is proven and the effect this would have on one’s self-conception:

“People found themselves thinking about the enormous role that contingency played in their lives. Some people experienced identity crises, feeling that their sense of self was undermined by the countless parallel versions of themselves.”

The fact that our lives could perceivably end up on countless paths, but that one has no agency over which path they end up on, has made for good fictional drama since Oedipus Rex. It’s seen in art also. In 1933, the surrealist Rene Magritte produced Elective Affinities, an illustration of how the will cannot be unbound from external forces, using a pretty on-the-nose theoretical reference. It’s an entertaining idea to play with. However, it would probably not be as entertaining to be consciously aware of this at all times in your own life.

Elective Affinities, 1933 by Rene Magritte

Applying this back to human decision-making, neuroscience too has found reason to push back against notions of discrete agency. In work such as Fried et al’s (2011), measuring neuronal activity allows researchers to predict how a participant will act in a basic choice paradigm before the participant themselves even consciously decides. There’s no doubt that the experience of freedom is strong; in fact, it’s very convincing. But it’s likely a bit of clever decoration that’s bolted onto the true drivers of choice that we have no conscious control over.

One interpretation of this line of thinking would be that it’s thus impossible for behavioural science in government to undermine an individual’s control over their own life, as it can’t exist, and even if people were hyper-conscious of the factors influencing them they could never count them all. If that’s true, then who cares if we’re influencing people that don’t want to be influenced? But that’s not really the point to care about; it’s the perception that’s more relevant.

I read a good line recently that said ‘it is agreeable sometimes to talk in primary colours even if you have to think in greys’. Given that most academic writing could be perceived as colourless in its entirety, it’s not surprising that the best metaphor I’ve seen for this desire for an illusory sense of control comes from somewhere else. To borrow from 2019 hip-hop again, this time Billy Woods:

Life is just two quarters in the machine

But, either you got it or don’t that’s the thing

I was still hittin’ the buttons, “Game Over” on the screen

Dollar movie theater, dingy foyer, little kid, not a penny to my name

Fuckin’ with the joystick, pretendin’ I was really playin’

To revisit the core question of this post: in the decade to come, what are behavioural scientists for, and what’s expected of us? The last decade provides an evidence base that suggests that behavioural interventions can influence individual choice for a positive social impact, but what if it’s damaging to citizens to realise that their sense of agency/locus of control has been undermined? How do you calculate that trade-off?

One tactic is to avoid it altogether. This year, Lades & Delaney published the first easily comprehensible framework for ethical behavioural policy under the acronym FORGOOD. Having three O’s sort of spoils a good acronym, but that’s beside the point.

The first O stands for Openness and the R for Respect, defined as:

Openness – is the nudge open or hidden or even manipulative?

Respect – does the nudge respect people’s autonomy, dignity, freedom of choice and privacy?

These complement survey data suggesting that members of the public are more likely to approve of nudges that operate transparently, and they allude to the aversion people have to policy with obfuscated means and goals. Nothing surprising there.

Building on this theme though, there also exists a suggestion that it makes more sense to simply move a lot of behavioural interventions away from classical libertarian paternalism and into the self-nudge realm. Instead of having to ask who watches the watchmen, you could simply enable a population of citizen choice architects to watch over themselves and their own biases. In that sense, policymakers wouldn’t be reducing autonomy; they could actually enhance it.

What’s sort of fascinating about that problem is, to almost certainly overdo the point, policymakers aren’t dealing with citizens’ autonomy at all. It’s the perception of autonomy that’s important. The locus of control. And it’s an absurdity that many behavioural practitioners fail to respect, so it should be intriguing to see how that evolves over the decade to come.  

The public relations failure that was the BIT’s involvement in the UK’s coronavirus response signalled that there remains a lack of trust between the public and the discipline of behavioural science, and any further damage to this relationship runs the risk of negating all the positive impacts that well-researched interventions can have in a modern society. In that sense it’s probably a good time to have such a wake-up call, while the falcon can still hear the falconer, and the buzz of Behavioural Science hasn’t yet been reduced to some sort of blunt hum.

But to truly earn the respect of the public, behavioural units could offer a little more respect too. People are probably quite averse to feeling that behavioural insights are impinging on their autonomy, even if it’s founded on an illusion. That’s not something that behavioural scientists seem to talk about all that much, and if they don’t start valuing it a tad more, it’s not clear what sort of beast slouches towards Whitehall to be born.

Fake MDMA, Political Polarisation & Social Nudges

Like many others, I’m pretty prone to ending up spending evenings in weird YouTube rabbit holes. It’s actually impressive that their algorithm is accurate to the point where it can predict that you’ll click on things which you hadn’t even previously thought about.

This past week, my meanderings have been coloured by old clips from magicians Penn & Teller and their mildly successful TV show ‘Fool Us’. The premise is that every episode, new magicians perform onstage in the hope that the aforementioned pair of veterans fail to work out how their tricks work.

It’s fairly shit, but I like it.

Strangely, the moments that I find most endearing are not the rarer occasions where Penn & Teller are left stumped, but rather when they catch how the performer tried to fool them. In these instances where the skill involved is revealed, you’re quickly reminded of something that psychologists have known for decades, but magicians have known for hundreds of years: people rely heavily on their perceptions, we think that what we see is all there is, and this can be played with. If a magician uses your preconceptions to imply that something is happening, you’re likely to follow along.

Coincidentally, this point was recently reinforced via my other main source of procrastination, Twitter.

It can be slightly discomforting to think that our understanding of our own experiences can be somewhat illusory, but I’m not sure that it should be ignored.

Bringing it back to magicians, one common technique that exploits our false perception of our own actions is the forced card choice. With some professional-level sleight of hand and a sense of timing, a magician can tell you to pick a card at random while being in total control of exactly which one you’re going to end up with.

What’s interesting in this interaction from a behavioural science perspective, as previously illustrated by Dr. Gustav Kuhn, is that you’ll feel like you’ve had a genuinely free choice, even though you’ve been heavily influenced. It’s a simple enough point: just because you’re certain in your beliefs, it doesn’t mean that they’re correct.

While not shining an overly positive light on human cognition, this can at least be amusing.

The more alarming aspect of this, however, is that these misperceptions aren’t limited to shitty TV shows or tripping on birth control pills. Our strongest beliefs about the world and our place in it are probably subject to bias and illusions of understanding.

In a previous post, I tried to illustrate how crafting a narrative for your own life and drawing a sense of certainty from your experiences can make for an easier existence. But it has its drawbacks too. Being able to internally accommodate for your own inconsistencies helps build a cohesive sense of self from your own perspective, but it can be pretty easy for others to observe instances where things don’t quite add up.

A few days ago, an article posted by the Bristol Post identified local men who publicly threatened the soon-to-be visiting Greta Thunberg online, one of whom was still brandishing a ‘Be Kind’ filter on his profile picture in the wake of Caroline Flack’s suicide. It’s obviously an extreme, and pathetic, example to use, but it raises an important question of how capacities such as empathy actually work. Are most of us consistent people, or does it just suit us to believe that?

Regarding morality in the social and political domain, for a long time the implicit assumption has been that moral standings direct political leanings. However, current research suggests that this relationship operates in the opposite direction. Working off Social Identity Theory, it suggests that once we’ve planted ourselves amongst a cohesive group (such as an ideology), how we feel about policies and social issues is largely directed by the influence of the group, even if we feel like our reasoning is grounded in pure free will.

Without trying to overdo a basic point, having that artificial belief in the depth of one’s convictions makes life easier to live, but it does not mean that that sense of conviction is justified.

Empirical work from Fernbach et al. (2013) highlighted the point that political extremity is often supported by an illusion of understanding. Participants were recruited from the US, the most obvious focus of polarisation this decade, and were asked for their views on many popular issues. They were happy to declare themselves firmly for or against certain policies, but once they were asked to explain exactly how these policies would work, their attitudes moved back toward the middle. It’s not really any different from thinking you know how the zipper on your jeans works, until someone actually asks you to explain it.

Using the famous work of Phil Tetlock, you could say that we’re ‘naïve realists’ that are ‘prisoners of our preconceptions’. It’s good for our sense of certainty, and being part of a consistent ideology fulfils several basic needs; after all, it is an adaptive evolutionary feature. But it’s probably bad for democracy.

Banksy’s Devolved Parliament, a good reminder of the role of evolution

Another excellent experiment showing that the desire for self-cohesion can overpower the desire for accuracy was carried out by Strandberg and colleagues. Participants were asked to indicate their attitudes regarding various political statements, before later being re-shown their answers and prompted to confirm that what they had submitted was correct. What they didn’t know was that their own responses were manipulated, and some of the opinions that they were confirming were the exact opposite of what they had indicated.

Around half of manipulated attitudes were accepted by participants as being their correct view, and it didn’t matter if they were heavily politically involved individuals. What’s even more bizarre is that in a follow up session one week later, those that had accepted manipulated feedback had actually begun to shift their attitudes in that direction. We’re good at being consistent people, even if we’re not in control of what attitude we’re supposed to be sticking to.

The immediate negative conclusion to draw here is that political attitudes are shallow, but perhaps an optimist would say at least they’re more flexible than we may have thought. The latter inference, however, is quite tough to justify when positioned alongside the current political landscape. Partisans disagree reliably and intensely, and increasing polarisation between liberals and conservatives is supported by data. But how much of this divide is structured upon an illusion?

Increased polarisation as illustrated by Pew Research data

In the US, ordinary Democrats and Republicans consistently overestimate the difference in attitudes between them and the median voter of the opposite party, and this is exacerbated further as one incorporates their party as part of their identity. This furthers the sort of depressing notion that belonging to an ideology is more about fulfilling social needs of belonging and closure than actually forming accurate representations of society. To that end, it’s not much of a surprise that politicians who make greater use of collective words such as ‘us’ and ‘we’ tend to get elected more often.

I don’t want the point of this to be critical of the average voter, and it would be hasty to use this evidence as reason to push back against pure democracy. But those participating in politics may be better served with an increased awareness of the factors that unconsciously affect them as well as their rivals.

Unfortunately, it’s truly unclear whether it is feasible to motivate people towards an environment where attitudes are revisable upon dissonant evidence, especially when it is natural to want to feel that your group is correct, and the other group is unquestionably wrong.

The Old Testament (I went to a Catholic school) tells the story of the Tower of Babel, where the whole of society came together to build towards heaven. Their punishment came from God confusing their language abilities, so that they could no longer understand each other, and the Tower couldn’t be finished. The resulting ‘confusion of tongues’ has since been famously portrayed in the work of Gustave Dore (below).

Partisan divides remind me a bit of that story. At some level we have collective goals to achieve a society where we can all be safe and happy, but it’s not clear how to talk to each other about it.

It might feel like an initial step would be addressing the fact that holding a political ideology is probably about more than knowing who you want to vote for, and the fact that most polarised opinions aren’t grounded in as much knowledge as their holders believe.

These are misperceptions after all, but people hold them for a reason.

A partisan voter might not be as informed a citizen as they believe they are, but they probably feel very purposeful. They probably find a lot of meaning in their actions and reap benefits in their health and happiness as a result.

The guy from the tweet earlier didn’t take any Class A drugs, but the fact that he believed he did was enough for him to (I assume) have a pretty decent night. The ‘Be Kind’ guy that wants people to throw milkshakes at Greta Thunberg is clearly a hypocrite, but the fact that he can rationalise that helps him get up in the morning.

The fact that we can form structure from incoherence is a fairly remarkable achievement of perception and cognition, it just probably shouldn’t decide who ends up running our countries.

So how does Behavioural Science as a field attempt to fit into this? It’s a shame to say that it hasn’t really tried to yet. An excellent article from Sander van der Linden (2018) has previously addressed this point well, acknowledging that behavioural science and ‘nudging’ tend to purposefully avoid society’s urgent but complex dilemmas, opting more towards low-hanging fruit and quick fixes.

It feels like behavioural scientists are at risk of patting themselves on the back a bit too quickly in how the area has developed. Polarisation of societies facilitates the exacerbation of our largest issues, from a lack of effective climate action to increasing wealth inequalities.

Yet even though we have a pretty developed set of fields describing why people make the choices they do and form the attitudes they carry, we don’t really seem to be doing much about it.

I think that’s something worth addressing.

Cognitive Entropy, Behavioural Science & Joey Bada$$


My favourite album of 2020 is actually a mixtape, and it’s actually from 2012. To make things more confusing, the 12-track tape I’m referring to is titled ‘1999’, but more importantly, it is now remembered as the improbably good debut of the then 17-year-old New York rapper Joey Bada$$. As a wannabe behavioural scientist, I’m used to trying to think about what motivates people, what makes them happy and, more generally, what it’s like to exist as a person in the world.

However, the formality of academic writing sometimes falls short of addressing these questions properly when viewed in comparison to raw self-expression, and ‘1999’ has been highlighting that to me quite a lot recently. The hook that tends to stick with me the most comes from the mixtape’s Nas-inspired final track ‘Third Eye Shit/Suspect’, and comprises this final set of words that the listener is left with at its conclusion:

“Suspect n***** don’t come outside

You might get your wig pushed back tonight

Said I deserve my respect

Brains don’t matter if your wig get split on some third eye shit”

It’s a fairly brutal reminder that life can get shaken from the foundation of certainty that we build upon it, and it’s a point that has transcended art, philosophy and psychology for centuries. In 2020, it lies at the heart of a nascent area of Behavioural Science: cognitive entropy. How you define entropy depends on your discipline, but generally speaking, entropy refers to the level of uncertainty regarding possible actions/outcomes in the environment.
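For anyone who wants a formal version, the usual reference point (borrowed from information theory rather than from the behavioural literature itself, which uses the term more loosely) is Shannon entropy. A minimal sketch, over a set of possible outcomes $x_1, \dots, x_n$:

$$
H(X) = -\sum_{i=1}^{n} p(x_i)\,\log_2 p(x_i)
$$

Entropy is at its maximum when every outcome is equally likely, i.e. when you genuinely have no idea what happens next, and it falls to zero when one outcome is certain. The latter, roughly, is the state that the strategies discussed below are trying to engineer.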

High entropy situations are highly uncertain, and we don’t like that. In fact, a fundamental motive that aids our survival is a desire to minimize entropy at all times, employing strategies we need not be conscious of. In grand terms, to avoid uncertainty is to remain sane, or as Karl Figlio more elegantly puts it, having a sense of certainty allows one to ‘live decisively in an incomprehensible world’.

This desire for reduced uncertainty has evolved to help humans survive in new environments and deal with the threat of the unknown, using tools such as complying with social norms and following common social narratives as a way of guiding one’s behaviour and attitudes to life, without actually having to worry about how things should or shouldn’t be. By this I mean that we behave as most people behave, dress as most people dress, go to college, get a job, start a family and (without trying to sound overly pretentious) follow the quasi-script that society has written for us.

It’s a set of rules, it’s entropy reducing, it makes life simpler and overall, it’s probably a good thing.

Eventually, we start to build an implicit sense of what it is to be a legitimate person, and our sense of how people like us are supposed to behave helps offer a sense of purpose to daily life, as well as a sense of meaning. At the most basic level, it offers a sense of certainty to our experiences, a set of rules for us and the world to operate by, and it relates to our goal of reducing cognitive entropy in our environment. However, evidence suggests that all this may sum to what can be most accurately described as a sort of useful illusion, one that most people can live their whole lives without addressing.

While quite abstract, and somewhat weird, it’s important to behavioural scientists for two reasons. Firstly, the feeling of certainty can be turned into a consumer product. For example, people buy insurance contracts with negative expected value, something homo economicus probably wouldn’t do. A more realistic definition of ‘irrational’ shouldn’t denounce this behaviour as such though, when you consider the increased cognitive entropy one allows into their life when they consider the possibility, however small, of a burnt down house with nothing to pay for a new one. The consumer good on offer need not be so relevant or grand in scale. As long as an advertisement can imply that purchasing a product can boost one’s sense of cohesion and security in the world, its market value should go up and consumers should be relatively happier about themselves.
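To make the insurance example concrete (with made-up numbers rather than figures from any actual policy): say there’s a one-in-a-thousand annual chance of losing a £200,000 house, and insuring it costs £300 a year.

$$
\mathbb{E}[\text{loss without insurance}] = 0.001 \times 200{,}000 = 200 \;<\; 300
$$

In pure expected-value terms the contract costs more (£300) than the expected loss it covers (£200), so homo economicus passes. The rest of us happily pay the extra £100 a year to delete the low-probability, high-entropy branch of the future in which everything is gone, and that feeling of certainty is exactly the product being sold.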

The second reason this is important to behavioural scientists is more interesting, and more complicated. In basic terms, when we are faced with high entropy situations, where our assumptions of the ‘rules of existence’ are shaken, the way we react is fascinating. Before social science research, this was seen in art.

Recognised most notably through the works of Salvador Dali and Rene Magritte, the surrealist art movement may have been the first to express how playing with normal objects and presenting them in an incongruous relation can arouse the senses. Magritte’s ‘Le Banquet’ takes advantage of one’s expectation of a typical forest sunset and flips it on its head by placing the setting sun starkly in front of the trees that should be shielding it, breaking the ‘rules’ that seemed so concrete previously. In literature, the works of Franz Kafka typically unfold in a fashion that lacks coherence and a sense of meaning, offering the reader a jarring sense of a world that lacks the consistency and certainty of daily life.

Clever research from Proulx & Heine (2009, for one example) has exposed participants to these varying forms of absurdist art, which were found to significantly increase the ability to identify novel patterns and solve complex tasks after exposure. The implied relationship is that threatening one’s sense of how the world works motivates a pushback, a desire to regain coherence and order, and almost appears to act as a stimulus for more effective task completion. It is simply an illustration of the first principle of cognitive entropy, our need to minimise it, which we appear to be quite efficient at.

While this first study is interesting, it does not appear immediately consequential; further experimental designs are slightly more troubling. It appears that uncertainty not only motivates simple task performance, it also influences our grander sense of morals. For example, one study removes the uncertainty-related prime of Kafka/Magritte and replaces it with the presentation of incongruous word pairings, such as ‘Turn-Frog’ and ‘Careful-Sweater’, before asking the participant to set bail for a prostitute who had been arrested in a hypothetical scenario.

These two items initially seem to have absolutely nothing to do with each other, but the participants that were primed with incoherent word pairings, versus common word pairings, tended to set bail twice as high, without being consciously aware of the effect that had just taken place. When the rules for how language works are broken, participants appear to more strongly affirm the rules of moral behaviour to compensate for the resulting discomfort.

The lesson here is that our senses of reason and meaning in the world, and our conclusions of how people ought to behave, are probably shallower than we expect. The question that looms is what happens when we realise this? Behavioural scientists in firms can use the insights of the first study to design interventions that increase employee entropy and, as a result, performance, but is there a limit to how far this can be pushed?

Returning to the notion of the artificially constructed rules of society that can govern our behaviour, attitudes and ambitions, we can relate this to Soren Kierkegaard’s notion of ‘automatic men’ living in comfortable ignorance. Again, this is a pretty good thing evolutionarily speaking, but it’s not always sustainable. As Alan Watkins explains in an excellent public talk, following the rules doesn’t always deliver. In mid-life, one may be a good citizen with a good job and a good house yet conclude that “I’m now supposed to be happy and blissful forever, and I’m not”.

This is a basic midlife existential crisis, possibly best illustrated by the Coen Brothers’ 2009 existentialist endeavour ‘A Serious Man’. Kierkegaard would refer to this moment as the dread that results from falling into self-consciousness. Camus would build his philosophy of the absurd around it. Joey Bada$$ would hark back to it in the recurring line: ‘Brains don’t matter if your wig get split on some third eye shit’.

Still from A Serious Man’s uncertainty principle scene. “We can’t ever really know what’s going on”.

An added nuance of strangeness is that this is not necessarily a bad thing. While crippling at the moment of crisis, realising that one’s assumptions of what makes for a good, happy life aren’t wholly accurate forces a recalibration of those ideas. On a small scale, we saw experiment participants learn to solve a lab task more effectively, but scaled up to life-size threats of meaning, people may learn to maximise their own well-being more effectively.

As backed up by empirical research (see Poulin & Silver, 2019), significant negative life events may affect worldviews, and thus behaviour and well-being. Perhaps suffering a crisis in life is actually rather fortunate in the long-term, which could be comforting. If we can design a paradigm that elicits this effect without having to put someone through an actual traumatic event, then even better.

What is concerning to me is how Behavioural Science in its current form fits into this equation. Having a thorough understanding of human motivation is immeasurably useful and can make for great welfare-enhancing policy, but as was repeatedly noted last week in an article titled ‘Imagining the Next Decade of Behavioral Science’, public policy is not the main arena we operate in anymore. “Never before has the essence of the field been so squarely in the wheelhouse of corporate interests,” remarks Philip Goff in the piece that combines insights from some of the field’s most influential figures.

As I mentioned at the beginning of this post, cognitive entropy and its relation to uncertainty in life is only a nascent feature of behavioural science that is likely to continually grow for decades to come. But who is going to influence what form it is going to grow into? People’s sense of certainty and security in an increasingly incoherent, information-packed world is a powerful driver of behaviour and can be harnessed for behaviour change. But if it is to be large-scale private corporations that want that change to mean more efficient workers, more persuasive advertising and more sales, then I’m not excited to see it.

As Todd Haugh articulates in the aforementioned article, “When we focus only on the is, and not the ought, we miss the deeper understanding of humanity that behavioural science invites”. To take inspiration from hip-hop once more, this time from Mos Def, “it’s a numbers game but shit don’t add up somehow”.