Ethical Concepts, Biases, and Moral Theories

Learning Objectives

  • Identify the major cognitive biases that distort individual and group thinking
  • Explain how ethical blind spots like bounded ethicality, ethical fading, and moral myopia lead to unethical behaviour
  • Distinguish between disinformation, misinformation, and malinformation
  • Define foundational moral concepts including moral absolutism, moral pluralism, moral reasoning, and the veil of ignorance
  • Compare deontology and utilitarianism as contrasting approaches to ethical judgement
  • Explain fiduciary duty and why it matters in professional relationships

Every day, your mind takes shortcuts. It filters what you see, shapes how you judge others, and quietly steers your decisions in directions you may not even notice. Some of these shortcuts help you function. Others lead you badly astray, especially when the stakes are high and ethical clarity matters most. Understanding these hidden patterns of thinking, along with the major philosophical frameworks that have been developed to guide moral judgement, is essential for anyone who wants to make sound ethical choices rather than merely believing they do.

How Your Mind Can Trick You: Cognitive Biases

A cognitive bias (a systematic error in thinking that distorts judgement) is not a sign of low intelligence. It is a feature of how every human brain works. Biases operate below conscious awareness, which is exactly what makes them dangerous. You do not choose to be biased; you simply are, unless you learn to recognise the patterns and actively correct for them.

Biases That Distort Personal Thinking

  • Confirmation bias — The tendency to seek out, favour, and remember information that supports what you already believe, while ignoring or dismissing evidence that contradicts it. A minister convinced that a particular policy is working will naturally pay more attention to reports showing positive results and brush aside data showing failure. This bias does not require dishonesty; it operates automatically.

  • Wishful thinking — A pattern of unrealistic, non-pragmatic thinking where decisions are based on what a person hopes or wants to be true rather than what evidence actually supports. It often serves to protect self-esteem or avoid confronting uncomfortable realities. A leader who insists on an impractical plan because they want it to succeed, despite all evidence to the contrary, is engaging in wishful thinking.

  • Overconfidence bias — The tendency to believe you are better at something than you objectively are. People commonly overestimate their own abilities in areas like driving, teaching, or decision-making. In governance, overconfidence can lead officials to take on tasks beyond their competence or ignore expert advice because they trust their own judgement too heavily.

  • Self-serving bias — The habit of attributing your successes to your own character and abilities while blaming your failures on external circumstances. When a project succeeds, the politician takes credit. When it fails, they blame the bureaucracy, the budget, or the opposition. This bias makes honest self-assessment very difficult.

  • Loss aversion — The psychological tendency to feel the pain of losing something more intensely than the pleasure of gaining something of equal or even greater value. People will often choose to avoid a possible loss rather than pursue an equivalent gain. This explains why individuals and institutions resist change even when the potential benefits clearly outweigh the risks. A short numeric sketch of this asymmetry follows this list.

  • Fundamental attribution error — The tendency to explain other people’s behaviour by pointing to their character (“she is lazy,” “he is careless”) while explaining your own behaviour through situational factors (“I was stuck in traffic,” “I had too much on my plate”). This leads to unfair judgements about others and excessive leniency toward yourself.

  • False consensus — The mistaken belief that your own opinions, values, or behaviours are more widely shared than they actually are. Someone subject to this bias assumes that most reasonable people think the way they do, which can make them dismissive of opposing views and blind to the diversity of moral perspectives around them.
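
Loss aversion is the one bias above with a standard quantitative model. The sketch below uses a simplified, linear form of prospect theory's value function; the loss-aversion coefficient of 2.25 is the figure commonly cited from Kahneman and Tversky's work, used here purely for illustration.

```python
# A stylised illustration of loss aversion. Losses are weighted more
# heavily than gains; 2.25 is the commonly cited Kahneman-Tversky
# estimate of the loss-aversion coefficient (illustrative only).

LOSS_AVERSION = 2.25

def subjective_value(outcome: float) -> float:
    """Map an objective gain or loss to how strongly it is felt."""
    return outcome if outcome >= 0 else LOSS_AVERSION * outcome

# A fair coin flip: win 100 or lose 100, each with probability 0.5.
gamble = [(0.5, +100.0), (0.5, -100.0)]

expected_money   = sum(p * x for p, x in gamble)
expected_feeling = sum(p * subjective_value(x) for p, x in gamble)

print(expected_money)    # 0.0    -> objectively break-even
print(expected_feeling)  # -62.5  -> subjectively feels like a bad deal
```

Because a break-even gamble registers as a net subjective loss, most people decline it, which is exactly the pattern the bullet above describes.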

Biases That Arise from Social Influence

Some biases do not come from inside your own head. They are created or amplified by the people and systems around you.

  • Pygmalion effect — The phenomenon where one person’s expectations of another actually shape that person’s performance. If a manager genuinely expects an employee to excel, they unconsciously provide more support, feedback, and opportunity, and the employee rises to meet those expectations. The reverse is equally true: low expectations can suppress performance.

  • Stereotyping — Holding an oversimplified, generalised belief about an entire category of people. “Men don’t cry.” “Women don’t play video games.” Stereotypes ignore the enormous variation within any group and lead to judgements about individuals based on the group they belong to rather than who they actually are.

  • Echo-chamber effect — A situation where your existing beliefs are reinforced through repeated exposure to the same kind of information. Social media platforms contribute to this by showing you content that matches your preferences and past behaviour. Over time, this shields you from opposing viewpoints and can gradually push your views toward more extreme positions. Radicalisation often follows this pattern.

  • Conformity bias — The tendency to take cues for proper behaviour from the actions of others rather than exercising your own independent judgement. When a mob forms and people join in without stopping to think about whether the action is right, conformity bias is at work. It explains why otherwise reasonable individuals can participate in or silently support harmful group actions like mob lynching.

  • Groupthink — A phenomenon that occurs within groups where the desire for harmony and consensus overrides honest critical thinking. Members suppress their doubts, avoid challenging the majority view, and collectively arrive at decisions that no individual member would have endorsed on their own. Groupthink has been behind some of history’s worst policy failures.

  • Obedience to authority — The tendency to follow orders from an authority figure even when those orders conflict with your personal ethical judgement. People comply not because they agree, but because the instruction comes from someone they perceive as having legitimate power. This dynamic helps explain how ordinary individuals can carry out deeply unethical directives within hierarchical systems.

  • Diffusion of responsibility — The phenomenon where individuals feel less personal obligation to act when other people are present. The classic example is a road accident victim lying on a busy street while dozens of passers-by walk on, each assuming that someone else will help. The larger the crowd, the weaker each person’s sense of individual responsibility becomes.

When Ethics Slips Away: Ethical Blind Spots

Cognitive biases distort how you think. Ethical blind spots go a step further: they distort whether you think about ethics at all. These are the mechanisms through which good, well-intentioned people end up behaving unethically, often without realising it.

  • Bounded ethicality — The idea that a person’s ability to make ethical decisions is inherently limited. These limitations can come from internal factors (cognitive overload, emotional stress, personal biases) or external ones (time pressure, organisational culture, incomplete information). Even someone who genuinely wants to do the right thing can make an unethical choice when these constraints narrow their view.

  • Ethical fading — A process closely related to moral disengagement (restructuring how you perceive reality so that your own harmful actions seem less harmful than they actually are). When ethical fading occurs, the moral dimension of a decision gradually disappears from view. People do not consciously decide to ignore ethics; the harmful action has not changed, but the frame through which they perceive it has shifted until its ethical content becomes invisible.

  • Moral myopia — The inability to see ethical issues clearly. Think of it as being near-sighted about ethics. A person with moral myopia is not deliberately ignoring the moral dimension; they genuinely cannot perceive it. This often results from a lack of ethical training, narrow professional focus, or an environment where ethical questions are never raised.

  • Moral muteness — The choice to remain silent about ethical concerns even when you recognise them. Unlike moral myopia (where you cannot see the problem), moral muteness means you see the problem clearly but choose not to speak up, typically because of fear of consequences, desire to fit in, or a belief that raising ethical objections will not change anything.

  • Rationalisation — The process of constructing seemingly reasonable justifications for behaviour that you know, at some level, is wrong. “My father told me to steal” or “everyone else does it too” or “the rules are unfair anyway.” Rationalisation allows people to maintain a positive self-image while engaging in conduct that violates their own stated values.

  • Role morality — The tendency to lower your own ethical standards because you see yourself as playing a particular role that somehow excuses you from those standards. A celebrity who promotes a product they would never personally use might justify this by thinking, “I am just doing my job as a brand ambassador.” The role becomes a shield that separates professional behaviour from personal ethics.

Information Disorders: When Truth Gets Twisted

In a world flooded with information, understanding how truth can be distorted is itself an ethical skill. Three distinct types of information disorder operate in public life; as the sketch after this list shows, they differ along two dimensions, truthfulness and intent:

  • Disinformation — False information that is created and spread with a deliberate intent to deceive or cause harm. The creator knows the information is wrong and distributes it anyway. Propaganda campaigns, fabricated news stories, and deliberately falsified data all fall into this category.

  • Misinformation — False information that is spread without any harmful intent. The person sharing it genuinely believes it to be true. A well-meaning friend forwarding an inaccurate health tip on a messaging group is spreading misinformation, not disinformation.

  • Malinformation — True information that is deliberately weaponised to cause harm to a specific person or group. The facts themselves are accurate, but they are shared out of context, at a strategic moment, or with a framing designed to damage. Leaking someone’s private medical records to destroy their career uses true information as a weapon.
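
The three categories above reduce to two yes/no questions: is the information true, and is it shared with intent to harm? A minimal Python sketch of that decision logic, included purely for illustration:

```python
# The three information disorders, expressed as two boolean dimensions.
# This mirrors the definitions above; the fourth combination (true
# information shared without harmful intent) is ordinary information.

def classify(is_true: bool, intends_harm: bool) -> str:
    if not is_true:
        return "disinformation" if intends_harm else "misinformation"
    return "malinformation" if intends_harm else "ordinary information"

print(classify(is_true=False, intends_harm=True))   # disinformation
print(classify(is_true=False, intends_harm=False))  # misinformation
print(classify(is_true=True,  intends_harm=True))   # malinformation
```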

A closely related concept is the red herring (something that misleads or distracts from what is actually relevant). In public discourse, red herrings are used to steer attention away from uncomfortable truths. Political manifestos, for instance, sometimes introduce emotionally charged but ultimately irrelevant issues to distract voters from the real questions.

Core Concepts in Moral Philosophy

Beyond biases and blind spots, ethics rests on a set of foundational concepts that help us think clearly about right and wrong, responsibility, and the nature of moral life.

How We Think About Ethics

  • Cognition — The mental process of acquiring knowledge and understanding through thought, experience, and the senses. Cognition is the foundation of all moral thinking: before you can judge whether something is right or wrong, you first need to perceive, process, and understand the situation.

  • Moral reasoning — The process of thinking through whether an action is right or wrong by applying ethical principles, weighing consequences, and considering duties. It is the deliberate, conscious activity of working through moral questions rather than relying on gut feeling alone.

  • Moral cognition — The specific branch of cognition that deals with how people process moral information, form moral judgements, and decide between competing ethical demands. It bridges psychology and ethics, examining the mental mechanisms behind moral decision-making.

Positions on Moral Truth

People disagree deeply about whether moral rules are fixed or flexible. Two positions anchor opposite ends of this spectrum:

  • Moral absolutism — The view that certain actions are universally right or wrong, regardless of context, culture, or consequences. A moral absolutist would say that lying is always wrong, no matter the situation. There are no exceptions, no grey areas, and no room for “it depends.”

  • Moral pluralism — The view that there are multiple valid moral frameworks and that no single ethical theory captures the whole truth. A moral pluralist recognises that deontological, utilitarian, virtue-based, and other approaches each capture something important about morality, and that different situations may call for different frameworks.

Who Counts in Moral Decisions

  • Moral agent — Any being that has the capacity to make moral choices and can be held responsible for those choices. Adult human beings of sound mind are the clearest examples of moral agents. They can distinguish right from wrong and can be praised or blamed for their actions.

  • Subject of moral worth — Any being that deserves moral consideration, whether or not it can make moral choices itself. A newborn infant, an animal, or a person in a coma cannot exercise moral agency, but they still have moral worth: they can be harmed, and harming them raises genuine ethical questions.

Balancing and Sustaining Ethical Behaviour

  • Moral equilibrium — The psychological tendency to “balance out” ethical behaviour. After doing something good, a person may feel licensed to behave less ethically in the next situation, as though they have built up a moral credit balance. Conversely, after acting badly, some people feel driven to compensate with a good deed. This internal balancing act can undermine consistent ethical conduct.

  • Pro-social behaviour — Actions that are intended to benefit other people or society as a whole. Helping a stranger, volunteering, donating, cooperating, and standing up for someone being treated unfairly are all forms of pro-social behaviour. It is empathy and moral concern expressed in action.

  • Incrementalism — The belief in or advocacy of change through small, gradual steps rather than sudden, sweeping transformation. In governance, incrementalism means making reforms bit by bit, testing what works, and building on each step. While it avoids the risks of radical change, critics argue that it can also be used to delay urgently needed action.

Trust, Contracts, and Fairness

  • Fiduciary duty — The obligation that arises when one party is entrusted to act solely in the interest of another. The person in the position of trust (the fiduciary) must put the other party’s interests above their own. Lawyer-client, doctor-patient, and accountant-client relationships are classic examples. Violating a fiduciary duty is a serious ethical and often legal breach.

  • Social contract theory — The idea that the rules and structures of society rest on an implicit agreement among its members. People give up certain freedoms (like the freedom to take whatever you want by force) in exchange for the benefits of an organised society (safety, order, shared resources). Governments derive their legitimacy from this unwritten contract, and when they break it, citizens have grounds to hold them accountable.

  • Veil of ignorance — A thought experiment proposed by the philosopher John Rawls. Imagine you are designing the rules of a society, but you do not know what position you will occupy in it. You do not know whether you will be rich or poor, healthy or disabled, powerful or marginalised. Behind this “veil,” you would design rules that are as fair as possible to everyone, because you might end up in the most disadvantaged position yourself. The veil of ignorance is a tool for testing whether a policy or rule is truly just; the sketch below shows one way to make that test concrete.
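
One way to operationalise the veil of ignorance is to compare candidate rules by how they treat the worst-off position, in the spirit of Rawls’s maximin reasoning. The sketch below does this; the two rule-sets and their welfare numbers are entirely hypothetical.

```python
# A toy veil-of-ignorance comparison. Each list gives the welfare of
# the social positions a society's rules produce. All numbers are
# hypothetical, chosen only to illustrate the reasoning.

societies = {
    "winner-takes-most": [100, 90, 70, 10],
    "broadly fair":      [70, 65, 60, 55],
}

for name, welfare in societies.items():
    print(f"{name}: average={sum(welfare) / len(welfare):.1f}, "
          f"worst-off={min(welfare)}")

# Behind the veil you might occupy any position, so Rawlsian (maximin)
# reasoning picks the rules with the best worst-off position:
choice = max(societies, key=lambda name: min(societies[name]))
print("choose:", choice)  # -> broadly fair
```

Ranking by average welfare alone would favour “winner-takes-most” (67.5 versus 62.5); not knowing which position you will land in is exactly what shifts the choice.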

Judging Right and Wrong: Moral Theories

All the concepts above feed into a larger question: how should we actually decide what is ethical? Two major theories, deontology and utilitarianism, offer contrasting answers, and applied ethics puts those answers to work on real problems.

Deontology: The Means Define Morality

Deontology (from the Greek word for “duty”) holds that the morality of an action lies in the action itself, not in its consequences. Certain acts are inherently right or wrong. Lying is always wrong, even if a lie might save someone’s life. Keeping a promise is always right, even if breaking it would produce a better outcome.

The strength of deontology is its clarity. You always know where you stand. The difficulty is that it can produce rigid conclusions that feel deeply unfair in specific situations. A deontologist who insists on truth-telling must face the uncomfortable question: is it wrong to lie to protect an innocent person from harm?

Utilitarianism: The Outcome Is What Matters

Utilitarianism takes the opposite approach. The morality of an action is determined entirely by its consequences. An action is ethical if it produces the greatest good for the greatest number of people. The means matter far less than the ends.

This framework drives much of public policy thinking. Cost-benefit analyses, welfare programmes, and infrastructure decisions often follow utilitarian logic: what choice benefits the most people? The challenge is that utilitarianism can justify harming a minority if the majority benefits, and it requires predicting consequences accurately, which is rarely possible in complex situations.
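
The utilitarian calculus can be sketched as simple arithmetic over who gains and who loses. In the hypothetical example below, the option with the greatest total benefit wins even though it leaves a minority worse off, which is precisely the objection raised above. The options, group sizes, and benefit figures are invented for illustration; real cost-benefit analysis is far more involved.

```python
# A minimal utilitarian comparison: choose the option with the greatest
# total benefit. Each option lists (group size, per-person benefit).
# All figures are hypothetical.

options = {
    "new highway": [(900_000, +4), (100_000, -10)],  # minority is harmed
    "bus network": [(900_000, +2), (100_000, +4)],
}

def total_benefit(groups) -> int:
    return sum(size * benefit for size, benefit in groups)

for name, groups in options.items():
    print(name, total_benefit(groups))   # highway: 2600000, buses: 2200000

best = max(options, key=lambda name: total_benefit(options[name]))
print("utilitarian choice:", best)       # -> new highway, despite the minority's loss
```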

Applied Ethics: Where Theory Meets Reality

Applied ethics is the practice of taking ethical theories (deontology, utilitarianism, virtue ethics, and others) and applying them to specific, real-world problems. Rather than debating abstract principles, applied ethics asks: what should a doctor do when a patient refuses life-saving treatment? How should a corporation balance profit with environmental responsibility? What are the ethical limits of artificial intelligence?

Applied ethics is where the study of morality becomes practical. It forces you to move beyond theoretical positions and grapple with the messy, context-dependent decisions that actual human beings face every day.