AI, Conscience, and a History of Decision-making Technologies

Simon King
Jan 19, 2024
Perhaps the most ‘human’ robots in film history — the Drones from Silent Running. But if we’re going to take advice from AI instead of just telling it what to do, can we teach it what it means to be human?

Scare stories about AI developing something akin to consciousness are common enough, despite the lack of an agreed-upon, working definition of consciousness. Alongside this, there are discussions about whether AI can reliably embrace concepts like morality. AI can make decisions based on facts, but can it make decisions based on conscience? Does it have to? As long as humans are around, and we don’t entirely delegate our responsibilities to AI, is there space for coexistence and mutual learning? What does history teach us about how to outsource decision-making whilst retaining values and conscience? Here are some thoughts prompted by two new books that explore AI in big-picture terms.

Decision-making Tools — Companies, Co-Pilots and Conscience

In his book The Coming Wave, Mustafa Suleyman asserts that “…technology emerges to fill human needs. If people have powerful reasons to build and use it, it will get built and used. Yet in most discussions of technology people still get stuck on what it is, forgetting why it was created in the first place.” This is true, but in this statement, is the DeepMind co-founder perhaps revealing a frustration that inventors have shared for millennia? That is, that once launched on the world, what control an inventor once had over their invention’s application all but vanishes. From the flint axe to social media, what started out with one purpose soon found many more purposes — positive and negative — once it was in the hands of the masses. Masses whose goals, experiences and thought processes were more diverse than the inventor could have imagined.

Suleyman also draws a line of influence running through the history of technology. The first wave of technologies was designed to shape the physical world. The second, to shape the information world. The upcoming third wave is set to change the biological world, primarily through AI. And each wave has brought its own unintended and unforeseen dangers. Aware of this, he appeals to everyone not to ignore the potential damage that could come with an AI-driven manipulation of biological material. Given the likely inadequacy (for multiple reasons) of regulation, Suleyman advocates for ‘containment’ — a strategic, ongoing combination of regulation, monitoring, education, safety checks and commercial transformation ranging from corporate ownership to accountability.

Academic and political podcaster David Runciman also likes a pleasing rule of three. In his book The Handover he explores the parallels between AI and the two other great technologies of humanity — the state and the corporation. The state, and its sibling technology the constitution, has evolved to (attempt to) ensure security for geographically connected humans. The corporation has developed to realise and spread prosperity. It might not be too much of a stretch to suggest this analysis fits with Suleyman’s ideas of the physical (a state with boundaries ensuring physical safety) and the informational (a corporation creating and distributing products and ideas). Both are problem-solving technologies. The state provides a stable, predictable administration freed from the constant uncertainty of who will succeed the monarch when they die or are ousted. The company provides an ongoing source of innovation and wealth by collective rather than individual means.

Both state and company outlast individuals, and in principle they reflect the will and abilities of the many rather than the few. That longevity has also seen dramatic changes, to the point where they bear little resemblance to their origins. Despite that evolution, state and company both reflect their citizens, employees, customers and users. They are, however, more than just an aggregation of the masses. Their leaders exercise a significant influence. They have their own value systems. They are specialised — very good at some types of decision-making or process, very poor at others. They serve the important role of saving people from the challenges of complex decision-making, and from a historical perspective, they have been largely successful.

Runciman suggests that AI is the third great concept in decision outsourcing. As ever, this is a double-edged sword. Just as with state and corporation, it frees people and introduces a ‘wisdom of crowds’, but can easily tip over into control and disenfranchisement. People already argue that the governments and businesses that run the world are supposed to work for us, yet we’ve ended up working for them. The reality is more subtle and knotty, and so it might be with AI. Especially because, unlike the state and the corporation, AI is, or will be, more general purpose. A technology in search of a problem. Or at best, a potential solution to a whole raft of problems, or aspects of problems, including those of state and commerce.

Both commentators make worthwhile contributions to the messy, well-publicised (and increasingly culture war-ish) ‘AI debate’. They sensibly try to step around the doomer versus boomer (or more radically, the decels v e/accs) trap by looking at the positive and negative. They put the catastrophising in context and emphasise hope through engagement and understanding. If we all embrace the idea that we have a role to play — governments, companies, and importantly voters/consumers — then we can all help avoid the most negative impacts of AI.

If, as Suleyman believes, we need to focus on why a technology is invented, rather than what it is, then we should perhaps focus on how AI will both relieve us of decision-making and make better decisions for us. In his exploration of the potential AI positives, he touches on one application of particular interest. He outlines a narrative that we are currently experiencing an AI transition — from a fairly blunt tool of classification and identification (indexing the world), through the current generative phase (making new things based on that indexing), before ultimately emerging into an interactive stage.

Taking an optimistic perspective, that means we’re going through the painful adolescence of AI. None of us are who we were when we were 15 years old. That suggests that fears over AI being forever biased, naïve and misanthropic, whilst real, may also be overstated. An uncomfortable, occasionally dangerous blip in an otherwise positive, progressive trend. The interactive AI will eventually control, coordinate and cooperate with other systems, as well as people. Importantly, according to Suleyman, it will not act in unethical ways. It will be respectful and challenging. Rather than echoing objectionable ideas and finding half-truths to support them, it will ask why someone would hold those opinions or ideas. It will listen, understand, consider, and offer a tailored counter-narrative.

Suleyman’s current project, Pi (Personal Intelligence), envisages a positive application where AI is a bespoke coach, tutor, diary secretary, interlocutor and mentor. A personal and personalised assistant, or in his words, your own ‘Chief of Staff’ (a title rarely used and little understood outside tech and politics). It perhaps betrays Suleyman’s (brief) time studying philosophy and theology. It appears that Pi (or his vision for it) might be based on the Socratic method of dialogue and understanding — continuously asking the right questions until an ultimate truth is revealed. An exciting prospect, but do we need to look beyond what the inventor intends to how the wider world might change their invention?

This AI assistant, or co-pilot, could support us in achieving our goals. According to Suleyman, it has the potential to impact many aspects of our lives. It could help farmers farm more efficiently and low-income households make better financial decisions. It could encourage us to make healthier choices. It could help students of all ages learn in the most effective way for them, as well as supporting their mental health and ambitions. It could lead to a more equal, productive society of abundance and empathy. So what happens when the advice we need strays from the purely practical, or overlaps with it? When the choice we make between products to buy or jobs to apply for incorporates a moral dimension? When a decision isn’t just about solving a problem or moving closer to a goal, but about our values and whether we can sleep at night?

These questions might not matter if the co-pilot is more like our digital twin. A robust version of ourselves that can test decisions before we commit. We would grow to trust it, not questioning its advice because it’s imperceptibly close to what we would think anyway. And if it did seem to oppose our instincts, we know that the co-pilot is much smarter than we are, so we’ll go with its suggestion. Do we retain agency, or an illusion of agency? We are, after all, the ones that act in the physical world. We’re just doing it on the advice of an AI.

Fundamental or Adaptive Morality

For millennia, philosophers of every type have tried to define good and bad and how to decide what lies in which category; to define morality either as an overarching code that completely governs our lives, or as a set of rules of thumb to help us cope with everyday challenges. Any time we think we have an absolute, unshakable moral certainty, some situation arises that creates a dilemma, forcing us to establish myriad exceptions. Despite the ambiguity of morality, most humans think of themselves as moral, as guided by a certain sense of right and wrong.

There are those that cling to the idea that all morality has an ultimately religious or spiritual origin. In some cases, that means rules and ideas passed down over centuries via books and oral traditions. For others, there is no hard evidence for what is right and wrong, rather it’s a divinely-inspired, unfathomable part of humanity. Others still argue that morality is akin to a set of subconscious instructions on how best to maximise our chances of survival and happiness.

Practically speaking, the foundations of morality lie in how humanity, this species with a very particular type of brain, has adapted and thrived. Among our hominid forebears, those groups that placed cooperation, influence, long-termism and the protection of others at their centre survived and flourished over time. The behaviours that best preserved and promoted health and security became the memes that characterised humanity. Whilst these inherited behaviours may have changed over time, we feel there’s an irreducible core to them. It may seem vague, but it’s usually something to do with helping others, behaving in a way we’d want others to behave, and fairness. Beyond those basic ideas, context and culture start to complicate things.

Alongside notions of morality comes conscience, the psychological compass that causes discomfort when we ignore or misread it. Frequently likened to an inner voice, it is both guru and judge. With it come many questions. How consistent and reliable is it? How does it help us achieve our goals? Are ‘bad’ people lacking a conscience, or can they just silence it? Are they so consumed by rage or envy or psychosis that their conscience stops working? Or is there something more powerful than conscience? And what about doing good things for bad reasons, or vice versa? Are we driven primarily and instinctively by self-preservation, or the preservation of our species, culture or ideas?

Even when we make decisions that are in line with our conscience, luck can turn a good decision into a bad one. We are not the authors of our own lives. We are contributors — significant ones, but still one amongst millions — who often have to make a best guess at what to do. We are the products of when, where and to whom we are born, as well as the actions of those with whom we share a community, country or planet. Whether we develop a conscience or are born with it, it is undoubtedly shaped by these influences. Yet its elusiveness makes us question whether it is real at all.

The Unknowable Conscience

In Disney’s 1940 film Pinocchio, the wooden puppet is gifted a conscience in the guise of a dapper cricket named Jiminy. His role is to guide Pinocchio, who has not had the opportunity to learn such things by experience, through the real-world maze of right and wrong. In the 1883 Italian book The Adventures of Pinocchio, the tone is a little more brutal. The (unnamed) Talking Cricket is killed by Pinocchio with a mallet during an argument. In a Shakespearean twist, the Cricket returns as a ghost to continue guiding Pinocchio, only to have his advice ignored and to watch the ex-puppet come a cropper. Despite being killed and dismissed, the Cricket never leaves Pinocchio, and never stops trying to help. But the Cricket is always its own entity. Wise and moral, it offers Pinocchio prudent advice, but cannot force the boy-puppet to act on it.

Conscience is that which looks inwards; that knows us and assesses us. Although intimately linked to consciousness, conscience is narrower. Consciousness is grounded in broad awareness, whilst conscience is knowledge — of oneself, the world, and others — and how these things interact, as well as the concept of rules and values. Conscience is heavily shaped by outside factors such as religion and culture, but also by the actions of other humans — what we see them do, how they treat us and how we would treat them. It is shaped by, and shapes, how we see the world, expect the world to be, and want it to be. Yet we often only realise that our conscience is at play in retrospect, when we ask ourselves questions or replay events in our mind. In this, our decisions in the moment are often influenced less by the voice in our heads than by trying to guess what the voice will say later, once we have time to reflect.

Conscience is complex and nuanced. Even when we know we are acting wrongly, we overlook it because of circumstances. We feel forced into acting in a way counter to our conscience, and then justify it. We condemn others for behaviours very similar to those we undertake ourselves. Our conscience can change, suddenly or over time, when exposed to new information. We even imagine our conscience as a third party observing us — a narrator, a version of ourselves, a god even — and judging us. We give our conscience a voice. We give it huge authority because we trust it to guide us. When we ignore it, the sense of guilt or failure can plague us, can refuse to be silenced, and ultimately damage us in very real ways.

A conscience, however, is not a thing in itself. It is not an area of the brain or a single psychological phenomenon. It has, as Paul Strohm has said, “an ‘identity problem’…it possesses no fixed or inherited content of its own, and it can be hailed and mobilized in defence of one position or equally in defence of its rival.” An act that one person sees as utterly immoral and against their conscience, another person may see as entirely justified. Both feel the same commitment to their conscience. Both feel the same guilt if they act in opposition to it. How these two people reach radically opposing views of the exact same event can be down to experience, but also to a wider, external idea of morality. Yet external ideas of morality are not solely responsible for what someone believes. An acquired sense of morality can be deeply held until it meets with experience or evidence that contradicts it. In other cases, experience confirms, even radicalises, a sense of right and wrong — proof once more of how irrational our species can be.

Those with similar moral beliefs, cultures and consciences can still act in different ways. Some people feel compelled to intervene in an act, or even a conversation, that they believe is wrong. Others don’t want to get involved. Both have good reasons for their action or inaction. Those that ‘take a stand’ are often lauded, but may also open themselves up to attack. Others are less confident in their beliefs or their ability to express or defend them. Still, people berate themselves for lacking ‘moral courage’ or ‘the courage of their convictions’. They desperately justify their inaction and promise to do better in the future. Yet the conscience can be too judgemental. Decisions are made in complex, fast-moving, volatile situations, and we do our best. Conscience is rarely a positive voice, being more frequently linked to negative feelings — remorse, guilt, shame. We act in ways to avoid conscience gnawing away at our self-esteem or self-image.

The complexity of conscience spirals on and on. How do leaders balance doing the right thing for one group but to the detriment of another? How do we cope with unintended consequences? Does belief matter more or less than behaviour? Should people be compelled to behave in certain ways, even if they don’t believe them to be right? Being compelled to a particular behaviour, by either reward or punishment, doesn’t alter one’s beliefs, yet to an outsider, what’s the difference? Once that behaviour becomes the norm, what then for the conscience of others that don’t believe in it?

Simply because we believe something to be morally right, and our conscience acts in accordance, doesn’t mean we are ultimately correct in that belief. Although not everyone is comfortable with the thought, it is demonstrably the case that our beliefs can be changed, and with them, how our consciences react. John Stuart Mill suggested that by allowing (and listening to) the expression of any opinion, and in particular ill-informed or mistaken opinions, fundamental truths will emerge as a result. (It is important to distinguish between an opinion expressed and an action taken.) These truths would then shape our beliefs. These beliefs, having been tested in the fire of every type of opposing but ultimately wrong opinion, could then safely, rationally and without remorse, be acted upon. Our consciences would be clear in the knowledge we did the right thing based on all available evidence. But can we really have a conscience that reliable?

Redefining Self, Conscience, Free Will and Values

Discussions about AI tend to focus on how it will affect our outer lives — how (or even if) we work, how we interact with governments and companies, our place as the dominant species on the planet and even our very existence. We look less at how it will affect our inner lives — how we think about ourselves, our values and the world. If, as David Runciman suggests, AI is the latest tool to help us deal with complex decisions, only this time on an individual basis, what does that say about how we will behave and interpret our world?

In the film Pinocchio, Jiminy Cricket was the gift of the wish-granting Blue Fairy. Today, it could be said, it is tech companies that know and sometimes grant our wishes. Tech companies are often said to know us better than we know ourselves. Can AI match that individual-level knowledge with all of the content of the internet and provide answers to our most difficult questions? Can it offer the right words of advice when things go wrong? Pose the right questions? Can it steer us in the right direction? And what does ‘right’ mean anyway? Right for whom? Right for now or in the future? Right for our personal goals or for others? If the AI Cricket is to truly help us, whether we listen to it or not, it must know what is right and wrong — for us and also in some absolute sense. In a sense that speaks to our conscience.

In guiding us towards what is best — practically, morally, personally — would AI be redefining what we mean by a sense of self? Can it change our values, purpose and meaning? Our conscience has evolved into something that informs, governs, judges and comments on our behaviours. It combines our goals and values with those of the groups to which we belong. Can AI help us to live by the values we hold, but sometimes forget, or that are overwhelmed by emotions and biases? Is AI the thing to redress the balance? A calm, sage voice in the chaos? Can it understand our true selves better than we can, and help us to achieve our real goals and ambitions (even if we’re not fully aware of them) whilst still allowing us free will?

Free will — as was once said of Keynesian economics — works in practice but not in theory. When people talk about the illusion of free will (or the illusion of consciousness or of conscience) we rarely focus on the ‘illusion’ bit of the phrase. Illusions are real; they’re just not what we think they are. From a deterministic perspective, it’s said that with enough data we could know everything that has happened and will happen. That ultimately everything is down to physics — from neurons to gravity, from the Big Bang onwards — everything follows a path. A causal relationship between every atom, cell and protein. Apart from some radicals, however, few people would go so far as to deny any degree of free will; some choice in our lives and control over our actions. Even if it’s a limited sort of free will, hedged in by genetics, culture and experience, as well as societal pressures. Perhaps the real philosophical challenge is not whether free will is limited, but to know from where those limits emerge. Is this interactive, co-pilot AI just another limit; a guardrail to our free will?

The AI Paradox: Influencing or Dictating Values

Our conscience is a fundamental part of our selves, or at least of our sense of self. It is consistent, but changeable (given the right circumstances). It’s a network of memories underpinned by imagination. It’s our culture, society, upbringing, genetics and experiences, but it’s much more as well. It can extrapolate into the future, react to the present, and assess the past, all with the aim of helping us make better choices. To what degree could AI augment or even replicate this? Can AI help us to make the right decisions, or just to live with the decisions we take?

The threat most frequently cited is that AI will replace swathes of the workforce — unskilled, skilled, and professional — particularly in process-driven and analytical professions. Many say that the roles most immune to AI disruption are human-centric jobs: those that require creative or abstract thought, imagination and empathy — artists, teachers, therapists, religious leaders. Yet AI has already proven to be competent in teaching, and is well on the way to becoming a popular therapy tool — one that is anonymous and non-judgemental, delivering tailored advice and exercises.

If AI can be an effective (if incomplete) therapist and teacher, can it also be our own philosopher, observing how we deal with the world, not judging us but helping us to see more clearly what we do and why? Assisting us in making the best decisions, not just for our outward goals of love or influence or money, but for our inner selves as well? Can AI help us live with ourselves and the decisions we make? Can it, as psychiatry often does, address issues around conscience — feelings of guilt and judgement — as well as guide us in ways that are better aligned to our values?

Having, if not replaced, then at least reimagined the roles of therapist, mentor and philosophical guide, what next for our AI co-pilot? How would it affect our choice of political leaders? Our political choices are often reflections of our conscience. Whether a person votes or campaigns for entirely altruistic or entirely self-interested reasons (or, more likely, somewhere between the two), that is still a translation of their conscience. Would our co-pilot sway our voting choices towards the more honest, more progressive leader or party? How will it handle the political balancing of individual rights — to freedom of expression, property, happiness, dignity — against the collective security and wellbeing of the group?

Again, we return to the heart of the so-called AI debate — even benign, positive applications of AI quickly twist into dark, destructive and scary uses. Especially when we see the near unlimited influence of deeply flawed tech leaders. A technology that could do so much to inform and guide society in affirmative, forward-looking ways can so easily be used to manipulate. To return to David Runciman, what is true of AI is also true of the state and the corporation — of physical technologies and informational ones.

Furthermore, Runciman highlights how AI has emerged in a historical context. Humans created states, those two together created companies, those three created AI. These things — these machines — have been created in the image of humans. Without humans, none of them could have appeared. States and corporations are unnatural — artificial. They are imperfect, and may always be, but they can be reengineered or rebuilt. And if they can be reengineered, they can be reengineered to make the right sort of AI.

Responsibility, Responses and Ideals

Conscience is the manifestation of an individual value system and a mental interlocutor. It is established and influenced in varying degrees by an interweaving of environmental and genetic factors. Its complexity makes it practically impossible to fully understand, let alone replicate. On a practical level, most of us see it as the voice that helps us navigate the world and make choices. Conscience could never be replaced by AI, but it is susceptible to outside influence. Conscience learns and adapts when it comes into contact with knowledge, charisma, authority, and ideas. It is shaped by first-hand experience, but also by stories, empathy and imagination. AI could be a new medium for some of those stories, ideas and authority.

We could see our personalised AI co-pilot (or perhaps a navigator to our role as captain) less like an intuitive, smarter form of search engine, and more like a silicon version of Philip Pullman’s daemons. Electronic spirit animals that represent a core part of us. They can never separate themselves from us. They live with us and depend on us, yet also see the world differently, and combine that view with our own. We feed it information about ourselves and it gives us guidance in return. It uses what it knows about us to filter the vast quantities of information that exist about the world. It becomes wise because, as John Stuart Mill argued, it is exposed to all opinions, especially the wrong ones, and so helps us to find the truth. It shields us from the false and the malign, but doesn’t ignore it.

It is not about AI’s ability to embrace morality itself, but its ability to influence our sense of morality. To be a familiar voice that guides and teaches us, just as nation states and commerce have influenced us. Personal and societal values, morality and decision-making maintain a complex interdependence. Adding AI to the mix creates a new challenge for our understanding of, and cooperation between, technology and humanity.

If AI is the technological generation of goal-orientated, adaptive behaviour, and our personal AI adapts to our goals, is it not incumbent on that AI to prevent us from making mistakes? And if our goals are malign or dangerous, what should it do? Try to dissuade us? Question why we are making that choice? Or call the authorities in a Minority Report-style attempt to thwart us? Will the co-pilot know the difference between impact and intent, and focus more on the former?

If AI redefines the idea of self, alters how we make decisions, and reframes our ideas of action, purpose and meaning, then we all need to learn. We need to know how to guide AI so that it can better guide us. We need to be more familiar with philosophy and psychology, critical thinking and risk assessment. AI could help us know ourselves better, or it could come to know us in order to manipulate us. But our response to AI should not be a binary choice. Like Runciman and Suleyman, we should be AI meliorists — not adopting pessimism or optimism as part of our identity, but recognising that whether things turn out well or ill is largely down to our actions. Taking responsibility is difficult and intimidating, but necessary.

Just as we all have a stake in and influence over the state and the commercial world, so we have a stake in AI. The state and the corporation are strong because of their long-termism, their specialisation, and their diverse natures, but they still rely on humans. That will also be true of AI. The state and the company are big, complicated, technical things, but we don’t need to understand them on a technical level in order to influence them. That will also be true of AI. We need to understand their goals, their place in the world, their strengths and weaknesses. We need to understand how to influence the state and business, because they will influence AI. We need to understand just how much control we have, and learn to use it effectively. And we need to understand who we are as a species, because it is us humans who are at the root of all of this — the state, the corporation, and the artificial intelligence. The physical, informational and biological. Do we surrender to fatalism and say we cannot exercise any control over these things? Or do we realise our collective abilities and responsibilities and, in ways small and large, improve these entities — make them more human?

© 2024 Simon King.

Simon King is the author of the books Predictability — Our Search for Certainty in an Uncertain World and We Are All Leaders — Good Leadership and Why We’re All Capable Of It. He looks at the intersection of work, culture, society and technology.
