What Is Selection Bias? | Definition & Examples

Published on September 30, 2022 by Kassiani Nikolopoulou. Revised on May 1, 2023.

Selection bias refers to situations where research bias is introduced due to factors related to the study’s participants. Selection bias can be introduced via the methods used to select the population of interest, the sampling methods, or the recruitment of participants. It is also known as the selection effect.


Selection bias may threaten the validity of your research, as the study population is not representative of the target population.

What is selection bias?

Selection bias occurs when the selection of subjects into a study (or their likelihood of remaining in the study) leads to a result that is systematically different from the target population.

Selection bias often occurs in observational studies where the selection of participants isn’t random, such as cohort studies, case-control studies, and cross-sectional studies. It also occurs in interventional studies or clinical trials due to poor randomization.

Selection bias is a form of systematic error. Systematic differences between participants and non-participants or between treatment and control groups can limit your ability to compare the groups and arrive at unbiased conclusions.

There are several potential sources of selection bias that can affect the study, either during the recruitment of participants or in the process of ensuring they remain in the study. These can include:

  • A flawed procedure for selecting participants, such as poorly defined inclusion and exclusion criteria
  • External reasons that explain why some people choose to participate while others don’t
  • Differences in how likely some participants are to be selected compared to others

Types of selection bias

Selection bias is a general term describing errors arising from factors related to the population being studied, but there are several types of selection bias:

  • Sampling bias or ascertainment bias occurs when some members of the intended population are less likely to be included than others. As a result, your sample is not representative of your population.
  • Attrition bias occurs when participants who drop out of a study are systematically different from those who remain.
  • Self-selection bias (or volunteer bias) arises when individuals decide entirely for themselves whether or not to participate in a study. As a result, participants may differ from those who don’t participate—for example, in terms of motivation.
  • Survivorship bias is a form of logical error that leads researchers who study a group to draw conclusions by only focusing on examples of successful individuals (the “survivors”) rather than the group as a whole.
  • Nonresponse bias is observed when people who don’t respond to a survey are different in significant ways from those who do. Non-respondents may be unwilling or unable to participate, leading to their under-representation in the study.
  • Undercoverage bias occurs when some members of your population are not represented in the sample. It is common in convenience sampling, where you recruit a sample that’s easy to obtain.

Examples of selection bias

Selection bias is introduced when data collection or data analysis is biased toward a specific subgroup of the target population.

Suppose you survey customer satisfaction for an online store. If you only focus on current customers, their feedback is more likely to be positive than if you also included those who stopped shopping prior to checkout. While current customers had a positive enough experience to ultimately buy something, those who stopped shopping will have different insights: for example, they may have been disappointed by the lack of service or the overall web design.

Because of selection bias, study findings do not reflect the target population as a whole.

Consider a case-control study of cervical cancer. The cases consist of 200 women with cervical cancer who were referred to Mass General Hospital for treatment from different parts of the state of Massachusetts. The cases were given a questionnaire asking about their income, education level, employment status, and so on.

Control subjects were recruited by interviewers going door to door in the area around the hospital between 9:00 a.m. and 5:00 p.m. Because this approach systematically misses people who work outside the home during the day, the controls are likely to differ from the cases in employment status and income, introducing selection bias.

How to avoid selection bias

Selection bias can be avoided as you recruit and retain your sample population.

  • For non-probability sampling designs, such as observational studies, try to make the control group as comparable as possible to the treatment group. This method is called matching: researchers pair each treated unit with a non-treated unit of similar characteristics (see the sketch after this list). This helps estimate the impact of a program or event for which it is not ethically or logistically feasible to randomize.
  • In experimental research, selection bias can be minimized by proper use of random assignment, ensuring that neither researchers nor participants know which group each participant is assigned to. Otherwise, knowledge of group assignment can taint the data.
  • Sampling bias can be avoided by carefully defining the target population and using probability sampling whenever possible. This ensures that all eligible participants have an equal chance of being included in the sample.
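To make matching concrete, here is a minimal sketch of one-to-one nearest-neighbor matching on a single numeric covariate. Everything here is hypothetical: the data are simulated, and real studies typically match on several covariates or on a propensity score.

```python
import random

def match_controls(treated, controls):
    """Pair each treated unit with the closest available control by covariate."""
    available = list(controls)
    pairs = []
    for t in treated:
        best = min(available, key=lambda c: abs(c - t))  # nearest neighbor
        available.remove(best)                           # match without replacement
        pairs.append((t, best))
    return pairs

random.seed(42)
treated_ages = [random.gauss(50, 5) for _ in range(5)]    # hypothetical treated group
control_ages = [random.gauss(40, 10) for _ in range(50)]  # hypothetical control pool
for t, c in match_controls(treated_ages, control_ages):
    print(f"treated age {t:5.1f}  <->  matched control age {c:5.1f}")
```

Matching without replacement, as here, uses each control at most once; with a small control pool, matching with replacement may be preferable.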



Cognitive Bias: How We Are Wired to Misjudge

By Charlotte Ruhl; reviewed by Saul Mcleod, PhD, and Olivia Guy-Evans, MSc.

Have you ever been so busy talking on the phone that you don’t notice the light has turned green and it is your turn to cross the street?

Have you ever shouted, “I knew that was going to happen!” after your favorite baseball team gave up a huge lead in the ninth inning and lost?

Or have you ever found yourself only reading news stories that further support your opinion?

These are just a few of the many instances of cognitive bias that we experience every day of our lives. But before we dive into these different biases, let’s backtrack first and define what bias is.


What is Cognitive Bias?

Cognitive bias is a systematic error in thinking, affecting how we process information, perceive others, and make decisions. It can lead to irrational thoughts or judgments and is often based on our perceptions, memories, or individual and societal beliefs.

Biases are unconscious and automatic processes designed to make decision-making quicker and more efficient. Cognitive biases can be caused by many things, such as heuristics (mental shortcuts), social pressures, and emotions.

Broadly speaking, bias is a tendency to lean in favor of or against a person, group, idea, or thing, usually in an unfair way. Biases are natural, a product of human nature, and they don’t simply exist in a vacuum or in our minds: they affect the way we make decisions and act.

In psychology, there are two main branches of biases: conscious and unconscious. Conscious or explicit bias is intentional — you are aware of your attitudes and the behaviors resulting from them (Lang, 2019).

Explicit bias can be good because it helps provide you with a sense of identity and can lead you to make good decisions (for example, being biased towards healthy foods).

However, these biases can often be dangerous when they take the form of conscious stereotyping.

On the other hand, unconscious bias, or cognitive bias, represents a set of unintentional biases — you are unaware of your attitudes and the behaviors resulting from them (Lang, 2019).

Cognitive bias is often a result of your brain’s attempt to simplify information processing: we receive roughly 11 million bits of information per second but can consciously process only about 40 bits per second (Orzan et al., 2012).

Therefore, we often rely on mental shortcuts (called heuristics) to help make sense of the world with relative speed. As such, these errors tend to arise from problems related to thinking: memory, attention, and other mental mistakes.

Cognitive biases can be beneficial because they do not require much mental effort and can allow you to make decisions relatively quickly, but like conscious biases, unconscious biases can also take the form of harmful prejudice that serves to hurt an individual or a group.

Although it may feel like there has been a recent rise of unconscious bias, especially in the context of police brutality and the Black Lives Matter movement, this is not a new phenomenon.

Thanks to Tversky and Kahneman (and the many psychologists who have built on their work), we now have an extensive catalog of our cognitive biases.

Again, these biases occur as an attempt to simplify the complex world and make information processing faster and easier. This section will dive into some of the most common forms of cognitive bias.


Confirmation Bias

Confirmation bias is the tendency to interpret new information as confirmation of your preexisting beliefs and opinions while giving disproportionately less consideration to alternative possibilities.

Real-World Examples

Since Wason’s 1960 experiment, real-world examples of confirmation bias have gained attention.

This bias often seeps into the research world when psychologists selectively interpret data or ignore unfavorable data to produce results that support their initial hypothesis.

Confirmation bias is also incredibly pervasive on the internet, particularly with social media. We tend to read online news articles that support our beliefs and fail to seek out sources that challenge them.

Various social media platforms, such as Facebook, help reinforce our confirmation bias by feeding us stories that we are likely to agree with – further pushing us down these echo chambers of political polarization.

Some examples of confirmation bias are especially harmful, specifically in the context of the law. For example, a detective may identify a suspect early in an investigation, seek out confirming evidence, and downplay falsifying evidence.

Experiments

The confirmation bias dates back to 1960 when Peter Wason challenged participants to identify a rule applying to triples of numbers.

Participants were first told that the sequence 2, 4, 6 fit the rule; they then had to generate triples of their own and were told whether each sequence fit the rule. The rule was simple: any ascending sequence.

Not only did participants have an unusually difficult time discovering this rule, devising overly complicated hypotheses instead, but they also generated only triples that confirmed their preexisting hypothesis (Wason, 1960).
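The logic of the task is easy to sketch in code. The rule function and the sample triples below are reconstructions for illustration; the point is that confirming tests cannot separate a narrow hypothesis such as "even numbers increasing by two" from the true rule, while a single disconfirming test can.

```python
def fits_rule(triple):
    """The experimenter's hidden rule: any strictly ascending sequence."""
    a, b, c = triple
    return a < b < c

# Hypothetical tests a participant might try under the narrow hypothesis
# "even numbers increasing by two":
confirming_tests = [(2, 4, 6), (10, 12, 14), (20, 22, 24)]
for t in confirming_tests:
    print(t, fits_rule(t))              # all True: the narrow hypothesis survives

# A single disconfirming test is far more informative:
print((1, 2, 3), fits_rule((1, 2, 3)))  # True: steps of +2 were never required
```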

Explanations

But why does confirmation bias occur? It’s partially due to the effect of desire on our beliefs. In other words, certain desired conclusions (ones that support our beliefs) are more likely to be processed by the brain and labeled as true (Nickerson, 1998).

This motivational explanation is often coupled with a more cognitive theory.

The cognitive explanation argues that because our minds can only focus on one thing at a time, it is hard to process alternate hypotheses in parallel (see information processing for more information); as a result, we only process the information that aligns with our beliefs (Nickerson, 1998).

Another theory explains confirmation bias as a way of enhancing and protecting our self-esteem.

As with the self-serving bias (see more below), our minds choose to reinforce our preexisting ideas because being right helps preserve our sense of self-esteem, which is important for feeling secure in the world and maintaining positive relationships (Casad, 2019).

Although confirmation bias has obvious consequences, you can still work towards overcoming it by being open-minded and willing to look at situations from a different perspective than you might be used to (Luippold et al., 2015).

Even though this bias is unconscious, training your mind to become more flexible in its thought patterns will help mitigate the effects of this bias.

Hindsight Bias

Hindsight bias refers to the tendency to perceive past events as more predictable than they actually were (Roese & Vohs, 2012). There are cognitive and motivational explanations for why we ascribe so much certainty to knowing the outcome of an event only once the event is completed.

Hindsight Bias Example

When sports fans know the outcome of a game, they often question certain decisions coaches make that they otherwise would not have questioned or second-guessed.

And fans are also quick to remark that they knew their team was going to win or lose, but, of course, they only make this statement after their team actually did win or lose.

Research has demonstrated that the hindsight bias isn’t necessarily mitigated by simply recognizing it (Pohl & Hell, 1996). Still, you can make a conscious effort to remind yourself that you can’t predict the future and motivate yourself to consider alternate explanations.

It’s important to do all we can to reduce this bias because when we are overly confident about our ability to predict outcomes, we might make future risky decisions that could have potentially dangerous outcomes.

Building on Tversky and Kahneman’s growing list of heuristics, researchers Baruch Fischhoff and Ruth Beyth-Marom (1975) were the first to directly investigate the hindsight bias in the empirical setting.

The team asked participants to judge the likelihood of several different outcomes of former U.S. president Richard Nixon’s visit to Beijing and Moscow.

After Nixon returned to the States, participants were asked to recall the likelihood of each outcome they had initially assigned.

Fischhoff and Beyth found that for events that actually occurred, participants greatly overestimated the initial likelihood they assigned to those events.

That same year, Fischhoff (1975) introduced a new method for testing the hindsight bias – one that researchers still use today.

Participants are given a short story with four possible outcomes, and they are told that one is true. When they are then asked to assign the likelihood of each specific outcome, they regularly assign a higher likelihood to whichever outcome they have been told is true, regardless of how likely it actually is.

But hindsight bias does not only exist in artificial settings. In 1993, Dorothee Dietrich and Matthew Olson asked college students to predict how the U.S. Senate would vote on the confirmation of Supreme Court nominee Clarence Thomas.

Before the vote, 58% of participants predicted that he would be confirmed, but after his actual confirmation, 78% of students said that they thought he would be approved – a prime example of the hindsight bias. And this form of bias extends beyond the research world.

From the cognitive perspective, hindsight bias may result from distortions of our memories of what we knew or believed before an event occurred (Inman, 2016).

It is easier to recall information that is consistent with our current knowledge, so our memories become warped in a way that agrees with what actually did happen.

Motivational explanations of the hindsight bias point to the fact that we are motivated to live in a predictable world (Inman, 2016).

When surprising outcomes arise, our expectations are violated, and we may experience negative reactions as a result. Thus, we rely on the hindsight bias to avoid these adverse responses to certain unanticipated events and reassure ourselves that we actually did know what was going to happen.

Self-Serving Bias

Self-serving bias is the tendency to take personal responsibility for positive outcomes and blame external factors for negative outcomes.

You would be right to ask how this is similar to the fundamental attribution error (Ross, 1977), which identifies our tendency to overemphasize internal factors for other people’s behavior while attributing external factors to our own.

The distinction is that the self-serving bias is concerned with valence, that is, with how good or bad an event or situation is, and it applies only to events in which you are the actor.

In other words, if a driver cuts in front of you as the light turns green, the fundamental attribution error might cause you to think that they are a bad person and not consider the possibility that they were late for work.

On the other hand, the self-serving bias is exercised when you are the actor. In this example, you would be the driver cutting in front of the other car, which you would tell yourself is because you are late (an external attribution to a negative event) as opposed to it being because you are a bad person.

From sports to the workplace, self-serving bias is incredibly common. For example, athletes are quick to take responsibility for personal wins, attributing their successes to their hard work and mental toughness, but point to external factors, such as unfair calls or bad weather, when they lose (Allen et al., 2020).

In the workplace, people cite internal factors when they are hired for a job but external factors when they are fired (Furnham, 1982). And in the office itself, workplace conflicts are given external attributions, while successes, whether a persuasive presentation or a promotion, are given internal explanations (Walther & Bazarova, 2007).

Additionally, self-serving bias is more prevalent in individualistic cultures, which place emphasis on self-esteem levels and individual goals, and it is less prevalent among individuals with depression (Mezulis et al., 2004), who are more likely to take responsibility for negative outcomes.

Overcoming this bias can be difficult because it is at the expense of our self-esteem. Nevertheless, practicing self-compassion – treating yourself with kindness even when you fall short or fail – can help reduce the self-serving bias (Neff, 2003).

The leading explanation for the self-serving bias is that it is a way of protecting our self-esteem (similar to one of the explanations for the confirmation bias).

We are quick to take credit for positive outcomes and divert the blame for negative ones to boost and preserve our individual ego, which is necessary for confidence and healthy relationships with others (Heider, 1982).

Another theory argues that self-serving bias occurs when surprising events arise. When certain outcomes run counter to our expectations, we ascribe external factors, but when outcomes are in line with our expectations, we attribute internal factors (Miller & Ross, 1975).

An extension of this theory asserts that we are naturally optimistic, so negative outcomes come as a surprise and receive external attributions as a result.

Anchoring Bias

Anchoring bias is closely related to the decision-making process. It occurs when we rely too heavily on either pre-existing information or the first piece of information (the anchor) when making a decision.

For example, if you first see a T-shirt that costs $1,000 and then see a second one that costs $100, you’re more likely to see the second shirt as cheap than you would if the first shirt you saw cost $120. Here, the price of the first shirt influences how you view the second.

Anchoring Bias Example

Sarah is looking to buy a used car. The first dealership she visits has a used sedan listed for $19,000. Sarah takes this initial listing price as an anchor and uses it to evaluate prices at other dealerships.

When she sees another similar used sedan priced at $18,000, that price seems like a good bargain compared to the $19,000 anchor price she saw first, even though the actual market value is closer to $16,000.

When Sarah finds a comparable used sedan priced at $15,500, she continues perceiving that price as cheap compared to her anchored reference price.

Ultimately, Sarah purchases the $18,000 sedan, overlooking that all of the prices seemed like bargains only in relation to the initial high anchor price.

The key elements that demonstrate anchoring bias here are:

  • Sarah establishes an initial reference price based on the first listing she sees ($19k)
  • She uses that initial price as her comparison/anchor for evaluating subsequent prices
  • This biases her perception of the market value of the cars she looks at after the initial anchor is set
  • She makes a purchase decision aligned with her anchored expectations rather than a more objective market value

Multiple theories seek to explain the existence of this bias.

One theory, known as anchoring and adjustment, argues that once an anchor is established, people insufficiently adjust away from it to arrive at their final answer, and so their final guess or decision is closer to the anchor than it otherwise would have been (Tversky & Kahneman, 1992).

And when people experience a greater cognitive load (the amount of information the working memory can hold at any given time; for example, a difficult decision as opposed to an easy one), they are more susceptible to the effects of anchoring.

Another theory, selective accessibility, holds that although we assume that the anchor is not a suitable answer (or a suitable price going back to the initial example) when we evaluate the second stimulus (or second shirt), we look for ways in which it is similar or different to the anchor (the price being way different), resulting in the anchoring effect (Mussweiler & Strack, 1999).

A final theory posits that providing an anchor changes someone’s attitudes to be more favorable to the anchor, which then biases future answers to have characteristics similar to the initial anchor.

Although there are many different theories for why we experience anchoring bias, they all agree that it affects our decisions in real ways (Wegener et al., 2001).

The first study that brought this bias to light was during one of Tversky and Kahneman’s (1974) initial experiments. They asked participants to compute the product of numbers 1-8 in five seconds, either as 1x2x3… or 8x7x6…

Participants did not have enough time to calculate the answer, so they had to estimate based on their first few calculations.

They found that those who computed the small multiplications first (i.e., 1x2x3…) gave a median estimate of 512, but those who computed the larger multiplications first gave a median estimate of 2,250 (although the actual answer is 40,320).

This demonstrates how the initial few calculations influenced the participant’s final answer.
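The arithmetic makes the anchoring effect easy to see: both orderings have the same true product, 8! = 40,320, but the first few partial products, the anchors a rushed estimator extrapolates from, differ enormously. A quick sketch:

```python
import math

ascending = [1, 2, 3, 4, 5, 6, 7, 8]
descending = ascending[::-1]

def partial_products(seq, k=3):
    """Return the first k running products: the 'anchor' a rushed estimator sees."""
    product, anchors = 1, []
    for x in seq[:k]:
        product *= x
        anchors.append(product)
    return anchors

print(partial_products(ascending))   # [1, 2, 6]    -> low anchor (median guess: 512)
print(partial_products(descending))  # [8, 56, 336] -> high anchor (median guess: 2,250)
print(math.factorial(8))             # 40320, the true product either way
```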

Availability Bias

Availability bias (also commonly referred to as the availability heuristic) refers to the tendency to think that examples that readily come to mind are more common than is actually the case.

In other words, information that comes to mind faster influences the decisions we make about the future. And just like with the hindsight bias, this bias is related to an error of memory.

But instead of being a memory fabrication, it is an overemphasis on a certain memory.

In the workplace, if someone is being considered for a promotion but their boss recalls one bad thing that happened years ago but left a lasting impression, that one event might have an outsized influence on the final decision.

Another common example is buying lottery tickets because the lifestyle and benefits of winning are more readily available in mind (and the potential emotions associated with winning or seeing other people win) than the complex probability calculation of actually winning the lottery (Cherry, 2019).
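For a sense of the probability calculation that loses out to vivid imagery, here is the jackpot chance in a hypothetical 6-of-49 lottery (the format is an assumption for illustration; real lotteries vary):

```python
from math import comb

# Hypothetical 6-of-49 format: pick 6 distinct numbers from 1-49, order irrelevant.
tickets = comb(49, 6)                    # number of equally likely combinations
print(tickets)                           # 13983816
print(f"P(jackpot) = 1 in {tickets:,}")  # roughly 1 in 14 million
```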

A final common example that is used to demonstrate the availability heuristic describes how seeing several television shows or news reports about shark attacks (or anything that is sensationalized by the news, such as serial killers or plane crashes) might make you think that this incident is relatively common even though it is not at all.

Regardless, this thinking might make you less inclined to go in the water the next time you go to the beach (Cherry, 2019).

As with most cognitive biases, the best way to overcome them is by recognizing the bias and being more cognizant of your thoughts and decisions.

And because we fall victim to this bias when our brain relies on quick mental shortcuts in order to save time, slowing down our thinking and decision-making process is a crucial step to mitigating the effects of the availability heuristic.

Researchers think this bias occurs because the brain is constantly trying to minimize the effort necessary to make decisions, and so we rely on certain memories – ones that we can recall more easily – instead of having to endure the complicated task of calculating statistical probabilities.

Two main types of memories are easier to recall: 1) those that more closely align with the way we see the world and 2) those that evoke more emotion and leave a more lasting impression.

This first type of memory was identified in 1973, when Tversky and Kahneman, our cognitive bias pioneers, conducted a study in which they asked participants if more words begin with the letter K or if more words have K as their third letter.

Although many more words have K as their third letter, 70% of participants said that more words begin with K, because recalling words by their first letter is not only easier but also aligns more closely with how we index words (we know words by their first letter far more readily than by their third).
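The claim is easy to check against any machine-readable word list. A minimal sketch, assuming a one-word-per-line list at the common Unix path /usr/share/dict/words (the path is an assumption; substitute any word list, and note that exact counts vary by dictionary):

```python
# Count words starting with 'k' versus words with 'k' as the third letter.
with open("/usr/share/dict/words") as f:
    words = [w.strip().lower() for w in f if len(w.strip()) >= 3]

first_k = sum(1 for w in words if w[0] == "k")
third_k = sum(1 for w in words if w[2] == "k")
print(f"words starting with k: {first_k}")
print(f"words with k third:    {third_k}")
# Tversky and Kahneman's claim is that the second count is larger for typical
# English vocabulary, even though most people guess the opposite.
```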

In terms of the second type of memory, the same duo ran an experiment in 1983, ten years later, in which half the participants were asked to guess the likelihood that a massive flood would occur somewhere in North America, and the other half had to guess the likelihood of a flood occurring due to an earthquake in California.

Although the latter is necessarily less likely (a flood caused by a California earthquake is a special case of a flood in North America, so the conjunction cannot be more probable than the general event), participants still said it would be much more common, because they could recall specific, emotionally charged events of earthquakes hitting California, largely due to the news coverage such events receive.

Together, these studies highlight how memories that are easier to recall greatly influence our judgments and perceptions about future events.

Inattentional Blindness

A final popular form of cognitive bias is inattentional blindness . This occurs when a person fails to notice a stimulus that is in plain sight because their attention is directed elsewhere.

For example, while driving a car, you might be so focused on the road ahead of you that you completely fail to notice a car swerve into your lane of traffic.

Because your attention is directed elsewhere, you aren’t able to react in time, potentially leading to a car accident. Experiencing inattentional blindness has its obvious consequences (as illustrated by this example), but, like all biases, it is not impossible to overcome.

Many theories seek to explain why we experience this form of cognitive bias. In reality, it is probably some combination of these explanations.

The conspicuity theory holds that certain sensory stimuli (such as bright colors) and cognitive stimuli (such as something familiar) are more likely to be processed, so stimuli that don’t fit into one of these two categories might be missed.

The mental workload theory describes how when we focus a lot of our brain’s mental energy on one stimulus, we are using up our cognitive resources and won’t be able to process another stimulus simultaneously.

Similarly, some psychologists explain how we attend to different stimuli with varying levels of attentional capacity, which might affect our ability to process multiple stimuli simultaneously.

In other words, an experienced driver might be able to see that car swerve into the lane because they are using fewer mental resources to drive, whereas a beginner driver might be using more resources to focus on the road ahead and be unable to process the car swerving in.

A final explanation argues that because our attentional and processing resources are limited, our brain dedicates them to what fits into our schemas or our cognitive representations of the world (Cherry, 2020).

Thus, when an unexpected stimulus comes into our line of sight, we might not be able to process it on the conscious level. The following example illustrates how this might happen.

The most famous study to demonstrate the inattentional blindness phenomenon is the invisible gorilla study (Most et al., 2001). This experiment asked participants to watch a video of two groups passing a basketball and count how many times the white team passed the ball.

Participants are able to accurately report the number of passes, but what they fail to notice is a gorilla walking directly through the middle of the circle.

Because this would not be expected, and because our brain is using up its resources to count the number of passes, we completely fail to process something right before our eyes.

A real-world example of inattentional blindness occurred in 1995 when Boston police officer Kenny Conley was chasing a suspect and ran by a group of officers who were mistakenly holding down an undercover cop.

Conley was convicted of perjury and obstruction of justice because he supposedly saw the fight between the undercover cop and the other officers and lied about it to protect the officers, but he stood by his word that he really hadn’t seen it (due to inattentional blindness) and was ultimately exonerated (Pickel, 2015).

The key to overcoming inattentional blindness is to maximize your attention by avoiding distractions such as checking your phone. And it is also important to pay attention to what other people might not notice (if you are that driver, don’t always assume that others can see you).

By working on expanding your attention and minimizing unnecessary distractions that will use up your mental resources, you can work towards overcoming this bias.

Preventing Cognitive Bias

As we know, recognizing these biases is the first step to overcoming them. But there are other small strategies we can follow in order to train our unconscious mind to think in different ways.

From strengthening our memory and minimizing distractions to slowing down our decision-making and improving our reasoning skills, we can work towards overcoming these cognitive biases.

An individual can evaluate his or her own thought process, also known as metacognition (“thinking about thinking”), which provides an opportunity to combat bias (Flavell, 1979).

This multifactorial process involves (Croskerry, 2003):

  • acknowledging the limitations of memory,
  • seeking perspective while making decisions,
  • being able to self-critique, and
  • choosing strategies to prevent cognitive error.

Many strategies used to avoid bias that we describe are also known as cognitive forcing strategies, which are mental tools used to force unbiased decision-making.

The History of Cognitive Bias

The term cognitive bias was first coined in the 1970s by Israeli psychologists Amos Tversky and Daniel Kahneman, who used this phrase to describe people’s flawed thinking patterns in response to judgment and decision problems (Tversky & Kahneman, 1974).

Tversky and Kahneman’s research program, the heuristics and biases program, investigated how people make decisions given limited resources (for example, limited time to decide which food to eat or limited information to decide which house to buy).

As a result of these limited resources, people are forced to rely on heuristics or quick mental shortcuts to help make their decisions.

Tversky and Kahneman wanted to understand the biases associated with this judgment and decision-making process.

To do so, the two researchers relied on a research paradigm that presented participants with some type of reasoning problem with a computed normative answer (they used probability theory and statistics to compute the expected answer).

Participants’ responses were then compared with the predetermined solution to reveal the systematic deviations in the mind.

After running several experiments with countless reasoning problems, the researchers were able to identify numerous norm violations that result when our minds rely on these cognitive biases to make decisions and judgments (Wilke & Mata, 2012).

Key Takeaways

  • Cognitive biases are unconscious errors in thinking that arise from problems related to memory, attention, and other mental mistakes.
  • These biases result from our brain’s efforts to simplify the incredibly complex world in which we live.
  • Confirmation bias, hindsight bias, the mere exposure effect, self-serving bias, the base rate fallacy, anchoring bias, availability bias, the framing effect, inattentional blindness, and the ecological fallacy are some of the most common examples of cognitive bias. Another example is the false consensus effect.
  • Cognitive biases directly affect our safety, interactions with others, and how we make judgments and decisions in our daily lives.
  • Although these biases are unconscious, there are small steps we can take to train our minds to adopt a new pattern of thinking and mitigate the effects of these biases.

Allen, M. S., Robson, D. A., Martin, L. J., & Laborde, S. (2020). Systematic review and meta-analysis of self-serving attribution biases in the competitive context of organized sport. Personality and Social Psychology Bulletin, 46 (7), 1027-1043.

Casad, B. (2019). Confirmation bias . Retrieved from https://www.britannica.com/science/confirmation-bias

Cherry, K. (2019). How the availability heuristic affects your decision-making . Retrieved from https://www.verywellmind.com/availability-heuristic-2794824

Cherry, K. (2020). Inattentional blindness can cause you to miss things in front of you . Retrieved from https://www.verywellmind.com/what-is-inattentional-blindness-2795020

Dietrich, D., & Olson, M. (1993). A demonstration of hindsight bias using the Thomas confirmation vote. Psychological Reports, 72 (2), 377-378.

Fischhoff, B. (1975). Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1 (3), 288.

Fischhoff, B., & Beyth, R. (1975). I knew it would happen: Remembered probabilities of once—future things. Organizational Behavior and Human Performance, 13 (1), 1-16.

Furnham, A. (1982). Explanations for unemployment in Britain. European Journal of Social Psychology, 12 (4), 335-352.

Heider, F. (1982). The psychology of interpersonal relations . Psychology Press.

Inman, M. (2016). Hindsight bias . Retrieved from https://www.britannica.com/topic/hindsight-bias

Lang, R. (2019). What is the difference between conscious and unconscious bias? FAQs. Retrieved from https://engageinlearning.com/faq/compliance/unconscious-bias/what-is-the-difference-between-conscious-and-unconscious-bias/

Luippold, B., Perreault, S., & Wainberg, J. (2015). Auditor’s pitfall: Five ways to overcome confirmation bias . Retrieved from https://www.babson.edu/academics/executive-education/babson-insight/finance-and-accounting/auditors-pitfall-five-ways-to-overcome-confirmation-bias/

Mezulis, A. H., Abramson, L. Y., Hyde, J. S., & Hankin, B. L. (2004). Is there a universal positivity bias in attributions? A meta-analytic review of individual, developmental, and cultural differences in the self-serving attributional bias. Psychological Bulletin, 130 (5), 711.

Miller, D. T., & Ross, M. (1975). Self-serving biases in the attribution of causality: Fact or fiction?. Psychological Bulletin, 82 (2), 213.

Most, S. B., Simons, D. J., Scholl, B. J., Jimenez, R., Clifford, E., & Chabris, C. F. (2001). How not to be seen: The contribution of similarity and selective ignoring to sustained inattentional blindness. Psychological Science, 12 (1), 9-17.

Mussweiler, T., & Strack, F. (1999). Hypothesis-consistent testing and semantic priming in the anchoring paradigm: A selective accessibility model. Journal of Experimental Social Psychology, 35 (2), 136-164.

Neff, K. (2003). Self-compassion: An alternative conceptualization of a healthy attitude toward oneself. Self and Identity, 2 (2), 85-101.

Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2 (2), 175-220.

Orzan, G., Zara, I. A., & Purcarea, V. L. (2012). Neuromarketing techniques in pharmaceutical drugs advertising. A discussion and agenda for future research. Journal of Medicine and Life, 5 (4), 428.

Pickel, K. L. (2015). Eyewitness memory. The handbook of attention , 485-502.

Pohl, R. F., & Hell, W. (1996). No reduction in hindsight bias after complete information and repeated testing. Organizational Behavior and Human Decision Processes, 67 (1), 49-58.

Roese, N. J., & Vohs, K. D. (2012). Hindsight bias. Perspectives on Psychological Science, 7 (5), 411-426.

Ross, L. (1977). The intuitive psychologist and his shortcomings: Distortions in the attribution process. In Advances in experimental social psychology (Vol. 10, pp. 173-220). Academic Press.

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5 (2), 207-232.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185 (4157), 1124-1131.

Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review , 90(4), 293.

Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5 (4), 297-323.

Walther, J. B., & Bazarova, N. N. (2007). Misattribution in virtual groups: The effects of member distribution on self-serving bias and partner blame. Human Communication Research, 33 (1), 1-26.

Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12 (3), 129-140.

Wegener, D. T., Petty, R. E., Detweiler-Bedell, B. T., & Jarvis, W. B. G. (2001). Implications of attitude change theories for numerical anchoring: Anchor plausibility and the limits of anchor effectiveness. Journal of Experimental Social Psychology, 37 (1), 62-69.

Wilke, A., & Mata, R. (2012). Cognitive bias. In Encyclopedia of human behavior (pp. 531-535). Academic Press.

Further Information

Test yourself for bias.

  • Project Implicit (IAT Test), from Harvard University
  • Implicit Association Test, from the Social Psychology Network
  • Test Yourself for Hidden Bias, from Teaching Tolerance
  • How the Concept of Implicit Bias Came Into Being, with Dr. Mahzarin Banaji, Harvard University, author of Blindspot: Hidden Biases of Good People (5:28 minutes; includes transcript)
  • Understanding Your Racial Biases, with John Dovidio, PhD, Yale University, from the American Psychological Association (11:09 minutes; includes transcript)
  • Talking Implicit Bias in Policing, with Jack Glaser, Goldman School of Public Policy, University of California Berkeley (21:59 minutes)
  • Implicit Bias: A Factor in Health Communication, with Dr. Winston Wong, Kaiser Permanente (19:58 minutes)
  • Bias, Black Lives and Academic Medicine, Dr. David Ansell on Your Health Radio, August 1, 2015 (21:42 minutes)
  • Uncovering Hidden Biases, Google talk with Dr. Mahzarin Banaji, Harvard University
  • Impact of Implicit Bias on the Justice System (9:14 minutes)
  • Students Speak Up: What Bias Means to Them (2:17 minutes)
  • Weight Bias in Health Care, from Yale University (16:56 minutes)
  • Gender and Racial Bias in Facial Recognition Technology (4:43 minutes)

Journal Articles

  • Mitchell, G. (2018). An implicit bias primer. Virginia Journal of Social Policy & the Law, 25, 27-59.
  • Nosek, B. A., Greenwald, A. G., & Banaji, M. R. (2007). The Implicit Association Test at age 7: A methodological and conceptual review. Automatic Processes in Social Thinking and Behavior, 4, 265-292.
  • Hall, W. J., Chapman, M. V., Lee, K. M., Merino, Y. M., Thomas, T. W., Payne, B. K., … & Coyne-Beasley, T. (2015). Implicit racial/ethnic bias among health care professionals and its influence on health care outcomes: A systematic review. American Journal of Public Health, 105 (12), e60-e76.
  • Burgess, D., Van Ryn, M., Dovidio, J., & Saha, S. (2007). Reducing racial bias among health care providers: Lessons from social-cognitive psychology. Journal of General Internal Medicine, 22 (6), 882-887.
  • Boysen, G. A. (2010). Integrating implicit bias into counselor education. Counselor Education & Supervision, 49 (4), 210-227.
  • Christian, S. (2013). Cognitive biases and errors as cause—and journalistic best practices as effect. Journal of Mass Media Ethics, 28 (3), 160-174.
  • Whitford, D. K., & Emerson, A. M. (2019). Empathy intervention to reduce implicit bias in pre-service teachers. Psychological Reports, 122 (2), 670-688.


2.2 Overcoming Cognitive Biases and Engaging in Critical Reflection

Learning Objectives

By the end of this section, you will be able to:

  • Label the conditions that make critical thinking possible.
  • Classify and describe cognitive biases.
  • Apply critical reflection strategies to resist cognitive biases.

To resist the potential pitfalls of cognitive biases, we have taken some time to recognize why we fall prey to them. Now we need to understand how to resist easy, automatic, and error-prone thinking in favor of more reflective, critical thinking.

Critical Reflection and Metacognition

To promote good critical thinking, put yourself in a frame of mind that allows critical reflection. Recall from the previous section that rational thinking requires effort and takes longer. However, it will likely result in more accurate thinking and decision-making. As a result, reflective thought can be a valuable tool in correcting cognitive biases. The critical aspect of critical reflection involves a willingness to be skeptical of your own beliefs, your gut reactions, and your intuitions. Additionally, the critical aspect engages in a more analytic approach to the problem or situation you are considering. You should assess the facts, consider the evidence, try to employ logic, and resist the quick, immediate, and likely conclusion you want to draw. By reflecting critically on your own thinking, you can become aware of the natural tendency for your mind to slide into mental shortcuts.

This process of critical reflection is often called metacognition in the literature of pedagogy and psychology. Metacognition means thinking about thinking and involves the kind of self-awareness that engages higher-order thinking skills. Cognition, or the way we typically engage with the world around us, is first-order thinking, while metacognition is higher-order thinking. From a metacognitive frame, we can critically assess our thought process, become skeptical of our gut reactions and intuitions, and reconsider our cognitive tendencies and biases.

To improve metacognition and critical reflection, we need to encourage the kind of self-aware, conscious, and effortful attention that may feel unnatural and may be tiring. Typical activities associated with metacognition include checking, planning, selecting, inferring, self-interrogating, interpreting an ongoing experience, and making judgments about what one does and does not know (Hacker, Dunlosky, and Graesser 1998). By practicing metacognitive behaviors, you are preparing yourself to engage in the kind of rational, abstract thought that will be required for philosophy.

Good study habits, including managing your workspace, giving yourself plenty of time, and working through a checklist, can promote metacognition. When you feel stressed out or pressed for time, you are more likely to make quick decisions that lead to error. Stress and lack of time also discourage critical reflection because they rob your brain of the resources necessary to engage in rational, attention-filled thought. By contrast, when you relax and give yourself time to think through problems, you will be clearer, more thoughtful, and less likely to rush to the first conclusion that leaps to mind. Similarly, background noise, distracting activity, and interruptions will prevent you from paying attention. You can use this checklist to try to encourage metacognition when you study:

  • Check your work.
  • Plan ahead.
  • Select the most useful material.
  • Infer from your past grades to focus on what you need to study.
  • Ask yourself how well you understand the concepts.
  • Check your weaknesses.
  • Assess whether you are following the arguments and claims you are working on.

Cognitive Biases

In this section, we will examine some of the most common cognitive biases so that you can be aware of traps in thought that can lead you astray. Cognitive biases are closely related to informal fallacies. Both fallacies and biases provide examples of the ways we make errors in reasoning.

Connections

See the chapter on logic and reasoning for an in-depth exploration of informal fallacies.


Confirmation Bias

One of the most common cognitive biases is confirmation bias , which is the tendency to search for, interpret, favor, and recall information that confirms or supports your prior beliefs. Like all cognitive biases, confirmation bias serves an important function. For instance, one of the most reliable forms of confirmation bias is the belief in our shared reality. Suppose it is raining. When you first hear the patter of raindrops on your roof or window, you may think it is raining. You then look for additional signs to confirm your conclusion, and when you look out the window, you see rain falling and puddles of water accumulating. Most likely, you will not be looking for irrelevant or contradictory information. You will be looking for information that confirms your belief that it is raining. Thus, you can see how confirmation bias—based on the idea that the world does not change dramatically over time—is an important tool for navigating in our environment.

Unfortunately, as with most heuristics, we tend to apply this sort of thinking inappropriately. One example that has recently received a lot of attention is the way in which confirmation bias has increased political polarization. When searching for information on the internet about an event or topic, most people look for information that confirms their prior beliefs rather than what undercuts them. The pervasive presence of social media in our lives is exacerbating the effects of confirmation bias since the computer algorithms used by social media platforms steer people toward content that reinforces their current beliefs and predispositions. These multimedia tools are especially problematic when our beliefs are incorrect (for example, they contradict scientific knowledge) or antisocial (for example, they support violent or illegal behavior). Thus, social media and the internet have created a situation in which confirmation bias can be “turbocharged” in ways that are destructive for society.

Confirmation bias is a result of the brain’s limited ability to process information. Peter Wason (1960) conducted early experiments identifying this kind of bias. He asked subjects to identify the rule that applies to a sequence of numbers—for instance, 2, 4, 6. Subjects were told to generate examples to test their hypothesis. What he found is that once a subject settled on a particular hypothesis, they were much more likely to select examples that confirmed their hypothesis rather than negated it. As a result, they were unable to identify the real rule (any ascending sequence of numbers) and failed to “falsify” their initial assumptions. Falsification is an important tool in the scientist’s toolkit when they are testing hypotheses and is an effective way to avoid confirmation bias.

In philosophy, you will be presented with different arguments on issues, such as the nature of the mind or the best way to act in a given situation. You should take your time to reason through these issues carefully and consider alternative views. What you believe to be the case may be right, but you may also fall into the trap of confirmation bias, seeing confirming evidence as better and more convincing than evidence that calls your beliefs into question.

Anchoring Bias

Confirmation bias is closely related to another bias known as anchoring. Anchoring bias refers to our tendency to rely on initial values, prices, or quantities when estimating the actual value, price, or quantity of something. If you are presented with a quantity, even if that number is clearly arbitrary, you will have a hard time discounting it in your subsequent calculations; the initial value “anchors” subsequent estimates. For instance, Tversky and Kahneman (1974) reported an experiment in which subjects were asked to estimate the percentage of African nations in the United Nations. First, the experimenters spun a wheel of fortune in front of the subjects that produced a random number between 0 and 100. Let’s say the wheel landed on 79. Subjects were asked whether the percentage of nations was higher or lower than the random number. Subjects were then asked to estimate the real percentage. Even though the initial anchoring value was random, people in the study found it difficult to deviate far from that number. For subjects receiving an initial value of 10, the median estimate was 25 percent, while for subjects receiving an initial value of 65, the median estimate was 45 percent.

In the same paper, Tversky and Kahneman described the way that anchoring bias interferes with statistical reasoning. In a number of scenarios, subjects made irrational judgments about statistics because of the way the question was phrased (i.e., they were tricked when an anchor was inserted into the question). Instead of expending the cognitive energy needed to solve the statistical problem, subjects were much more likely to “go with their gut,” or think intuitively. That type of reasoning generates anchoring bias. When you do philosophy, you will be confronted with some formal and abstract problems that will challenge you to engage in thinking that feels difficult and unnatural. Resist the urge to latch on to the first thought that jumps into your head, and try to think the problem through with all the cognitive resources at your disposal.

Availability Heuristic

The availability heuristic refers to the tendency to evaluate new information based on the most recent or most easily recalled examples. The availability heuristic occurs when people take easily remembered instances as being more representative than they objectively are (i.e., based on statistical probabilities). In very simple situations, the availability of instances is a good guide to judgments. Suppose you are wondering whether you should plan for rain. It may make sense to anticipate rain if it has been raining a lot in the last few days since weather patterns tend to linger in most climates. More generally, scenarios that are well-known to us, dramatic, recent, or easy to imagine are more available for retrieval from memory. Therefore, if we easily remember an instance or scenario, we may incorrectly think that the chances are high that the scenario will be repeated. For instance, people in the United States estimate the probability of dying by violent crime or terrorism much more highly than they ought to. In fact, these are extremely rare occurrences compared to death by heart disease, cancer, or car accidents. But stories of violent crime and terrorism are prominent in the news media and fiction. Because these vivid stories are dramatic and easily recalled, we have a skewed view of how frequently violent crime occurs.

Tribalism

Another more loosely defined category of cognitive bias is the tendency for human beings to align themselves with groups with whom they share values and practices. The tendency toward tribalism is an evolutionary advantage for social creatures like human beings. By forming groups to share knowledge and distribute work, we are much more likely to survive. Not surprisingly, human beings with pro-social behaviors persist in the population at higher rates than human beings with antisocial tendencies. Pro-social behaviors, however, go beyond wanting to communicate and align ourselves with other human beings; we also tend to see outsiders as a threat. As a result, tribalistic tendencies both reinforce allegiances among in-group members and increase animosity toward out-group members.

Tribal thinking makes it hard for us to objectively evaluate information that either aligns with or contradicts the beliefs held by our group or tribe. This effect can be demonstrated even when in-group membership is not real or is based on some superficial feature of the person—for instance, the way they look or an article of clothing they are wearing. A related bias is called the bandwagon fallacy. The bandwagon fallacy can lead you to conclude that you ought to do something or believe something because many other people do or believe the same thing. While other people can provide guidance, they are not always reliable. Furthermore, just because many people believe something doesn’t make it true.

Sunk Cost Fallacy

Sunk costs refer to the time, energy, money, or other costs that have been paid in the past. These costs are “sunk” because they cannot be recovered. The sunk cost fallacy is the tendency to attach a greater value to things in which you have already invested resources than those things actually have today. Human beings have a natural tendency to hang on to whatever they invest in and are loath to give something up even after it has been proven to be a liability. For example, a person may have sunk a lot of money into a business over time, and the business may clearly be failing. Nonetheless, the businessperson will be reluctant to close shop or sell the business because of the time, money, and emotional energy they have spent on the venture. This is the behavior of “throwing good money after bad” by continuing to irrationally invest in something that has lost its worth because of emotional attachment to the failed enterprise. People will engage in this kind of behavior in all kinds of situations and may continue a friendship, a job, or a marriage for the same reason—they don’t want to lose their investment even when they are clearly headed for failure and ought to cut their losses.

A similar type of faulty reasoning leads to the gambler’s fallacy, in which a person reasons that future chance events will be more likely if they have not happened recently. For instance, if I flip a coin many times in a row, I may get a string of heads. But even if I flip several heads in a row, that does not make it more likely I will flip tails on the next coin flip. Each coin flip is statistically independent, and there is an equal chance of turning up heads or tails. The gambler, like the reasoner from sunk costs, is tied to the past when they should be reasoning about the present and future.
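To see the fallacy in action, a quick simulation is enough. The Python sketch below (my own illustration, not part of the original text; the seed and flip count are arbitrary) tallies what happens immediately after a streak of three heads in a million fair flips: tails still comes up about half the time.

```python
# A minimal simulation (my own illustration; the seed and counts are
# invented) of why the gambler's fallacy fails: after a run of three
# heads, tails is still no more likely than one half.
import random

random.seed(42)
flips = [random.choice("HT") for _ in range(1_000_000)]

# Collect every flip that immediately follows three heads in a row.
followers = [flips[i + 3] for i in range(len(flips) - 3)
             if flips[i] == flips[i + 1] == flips[i + 2] == "H"]

print(f"P(tails after HHH) ~ {followers.count('T') / len(followers):.3f}")
# Prints roughly 0.500 -- the streak carries no information about the next flip.
```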

There are important social and evolutionary purposes for this kind of backward-looking thinking. Sunk-cost thinking keeps parents engaged in the growth and development of their children after they are born. Sunk-cost thinking builds loyalty and affection among friends and family. More generally, a commitment to sunk costs encourages us to engage in long-term projects, and this type of thinking has the evolutionary purpose of fostering culture and community. Nevertheless, it is important to periodically reevaluate our investments in both people and things.

In recent ethical scholarship, there is some debate about how to assess the sunk costs of moral decisions. Consider the case of war. Just-war theory dictates that wars may be justified in cases where the harm imposed on the adversary is proportional to the good gained by the act of defense or deterrence. It may be that, at the start of the war, those costs seemed proportional. But after the war has dragged on for some time, it may seem that the objective cannot be obtained without a greater quantity of harm than had been initially imagined. Should the evaluation of whether a war is justified estimate the total amount of harm done or prospective harm that will be done going forward (Lazar 2018)? Such questions do not have easy answers.

Table 2.1 summarizes these common cognitive biases.

Think Like a Philosopher

As we have seen, cognitive biases are built into the way human beings process information. They are common to us all, and it takes self-awareness and effort to overcome the tendency to fall back on biases. Consider a time when you have fallen prey to one of the five cognitive biases described above. What were the circumstances? Recall your thought process. Were you aware at the time that your thinking was misguided? What were the consequences of succumbing to that cognitive bias?

Write a short paragraph describing how that cognitive bias allowed you to make a decision you now realize was irrational. Then write a second paragraph describing how, with the benefit of time and distance, you would have thought differently about the incident that triggered the bias. Use the tools of critical reflection and metacognition to improve your approach to this situation. What might have been the consequences of behaving differently? Finally, write a short conclusion describing what lesson you take from reflecting back on this experience. Does it help you understand yourself better? Will you be able to act differently in the future? What steps can you take to avoid cognitive biases in your thinking today?

Source: Nathan Smith, Introduction to Philosophy. OpenStax, Rice University, 2022. Licensed under a Creative Commons Attribution License. https://openstax.org/books/introduction-philosophy/pages/2-2-overcoming-cognitive-biases-and-engaging-in-critical-reflection

Identifying and Avoiding Bias in Research

This narrative review provides an overview on the topic of bias as part of Plastic and Reconstructive Surgery's series of articles on evidence-based medicine. Bias can occur in the planning, data collection, analysis, and publication phases of research. Understanding research bias allows readers to critically and independently review the scientific literature and avoid treatments which are suboptimal or potentially harmful. A thorough understanding of bias and how it affects study results is essential for the practice of evidence-based medicine.

The British Medical Journal recently called evidence-based medicine (EBM) one of the fifteen most important milestones since the journal's inception [1]. The concept of EBM was created in the early 1980s as clinical practice became more data-driven and literature based [1,2]. EBM is now an essential part of medical school curriculum [3]. For plastic surgeons, the ability to practice EBM is limited. Too frequently, published research in plastic surgery demonstrates poor methodologic quality, although a gradual trend toward higher level study designs has been noted over the past ten years [4,5]. In order for EBM to be an effective tool, plastic surgeons must critically interpret study results and must also evaluate the rigor of study design and identify study biases. As the leadership of Plastic and Reconstructive Surgery seeks to provide higher quality science to enhance patient safety and outcomes, a discussion of the topic of bias is essential for the journal's readers. In this paper, we will define bias and identify potential sources of bias which occur during study design, study implementation, and during data analysis and publication. We will also make recommendations on avoiding bias before, during, and after a clinical trial.

I. Definition and scope of bias

Bias is defined as any tendency which prevents unprejudiced consideration of a question [6]. In research, bias occurs when “systematic error [is] introduced into sampling or testing by selecting or encouraging one outcome or answer over others” [7]. Bias can occur at any phase of research, including study design or data collection, as well as in the process of data analysis and publication (Figure 1). Bias is not a dichotomous variable. Interpretation of bias cannot be limited to a simple inquisition: is bias present or not? Instead, reviewers of the literature must consider the degree to which bias was prevented by proper study design and implementation. As some degree of bias is nearly always present in a published study, readers must also consider how bias might influence a study's conclusions [8]. Table 1 provides a summary of different types of bias, when they occur, and how they might be avoided.

Figure 1. Major Sources of Bias in Clinical Research

Table 1. Tips to avoid different types of bias during a trial.

Chance and confounding can be quantified and/or eliminated through proper study design and data analysis. However, only the most rigorously conducted trials can completely exclude bias as an alternate explanation for an association. Unlike random error, which results from sampling variability and which decreases as sample size increases, bias is independent of both sample size and statistical significance. Bias can cause estimates of association to be either larger or smaller than the true association. In extreme cases, bias can cause a perceived association which is directly opposite of the true association. For example, prior to 1998, multiple observational studies demonstrated that hormone replacement therapy (HRT) decreased risk of heart disease among post-menopausal women [8,9]. However, more recent studies, rigorously designed to minimize bias, have found the opposite effect (i.e., an increased risk of heart disease with HRT) [10,11].
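This independence from sample size is easy to demonstrate with a toy simulation. In the Python sketch below (my own invented example; the population, selection rule, and seed are not from the paper), random error shrinks as the sample grows, but a sample drawn by a rule that favors larger values remains off-target at every sample size.

```python
# A toy sketch (my own invented example, not data from the paper) contrasting
# random error with systematic bias when estimating a true mean of 0.
import random

random.seed(0)
population = [random.gauss(0, 1) for _ in range(200_000)]

def biased_sample(pop, n):
    """Hypothetical flawed recruitment: positive values enroll more often."""
    out = []
    while len(out) < n:
        x = random.choice(pop)
        if random.random() < (0.8 if x > 0 else 0.5):
            out.append(x)
    return out

for n in (100, 10_000, 100_000):
    unbiased = [random.choice(population) for _ in range(n)]
    biased = biased_sample(population, n)
    print(f"n={n:>6}  unbiased mean={sum(unbiased)/n:+.3f}  "
          f"biased mean={sum(biased)/n:+.3f}")
# The unbiased estimate converges toward 0 as n grows; the biased one
# stays near +0.18 no matter how large the sample gets.
```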

II. Pre-trial bias

Sources of pre-trial bias include errors in study design and in patient recruitment. These errors can cause fatal flaws in the data which cannot be compensated for during data analysis. In this section, we will discuss the importance of clearly defining both risk and outcome, the necessity of standardized protocols for data collection, and the concepts of selection and channeling bias.

Bias during study design

Risk and outcome should be clearly defined prior to study implementation. Subjective measures, such as the Baker grade of capsular contracture, can have high inter-rater variability, and their arbitrary cutoffs may make distinguishing between groups difficult [12]. This can inflate the observed variance seen with statistical analysis, making a statistically significant result less likely. Objective, validated risk stratification models, such as those published by Caprini [13] and Davison [14] for venous thromboembolism, or standardized outcomes measures such as the Breast-Q [15], should have lower inter-rater variability and are more appropriate for use. When risk or exposure is retrospectively identified via medical chart review, it is prudent to cross-reference data sources for confirmation. For example, a chart reviewer should confirm a patient-reported history of sacral pressure ulcer closure with physical exam findings and by review of an operative report; this will decrease discrepancies when compared to using a single data source.

Data collection methods may include questionnaires, structured interviews, physical exam, laboratory or imaging data, or medical chart review. Standardized protocols for data collection, including training of study personnel, can minimize inter-observer variability when multiple individuals are gathering and entering data. Blinding of study personnel to the patient's exposure and outcome status, or if not possible, having different examiners measure the outcome than those who evaluated the exposure, can also decrease bias. Due to the presence of scars, patients and those directly examining them cannot be blinded to whether or not an operation was received. For comparisons of functional or aesthetic outcomes in surgical procedures, an independent examiner can be blinded to the type of surgery performed. For example, a hand surgery study comparing lag screw versus plate and screw fixation of metacarpal fractures could standardize the surgical approach (and thus the surgical scar) and have functional outcomes assessed by a blinded examiner who had not viewed the operative notes or x-rays. Blinded examiners can also review imaging and confirm diagnoses without examining patients [16,17].

Selection bias

Selection bias may occur during identification of the study population. The ideal study population is clearly defined, accessible, reliable, and at increased risk to develop the outcome of interest. When a study population is identified, selection bias occurs when the criteria used to recruit and enroll patients into separate study cohorts are inherently different. This can be a particular problem with case-control and retrospective cohort studies where exposure and outcome have already occurred at the time individuals are selected for study inclusion [18]. Prospective studies (particularly randomized, controlled trials) where the outcome is unknown at time of enrollment are less prone to selection bias.

Channeling bias

Channeling bias occurs when patient prognostic factors or degree of illness dictates the study cohort into which patients are placed. This bias is more likely in non-randomized trials when patient assignment to groups is performed by medical personnel. Channeling bias is commonly seen in pharmaceutical trials comparing old and new drugs to one another [19]. In surgical studies, channeling bias can occur if one intervention carries a greater inherent risk [20]. For example, hand surgeons managing fractures may be more aggressive with operative intervention in young, healthy individuals with low perioperative risk. Similarly, surgeons might tolerate imperfect reduction in the elderly, a group at higher risk for perioperative complications and with decreased need for perfect hand function. Thus, a selection bias exists for operative intervention in young patients. Now imagine a retrospective study of operative versus non-operative management of hand fractures. In this study, young patients would be channeled into the operative study cohort and the elderly would be channeled into the nonoperative study cohort.

III. Bias during the clinical trial

Information bias is a blanket classification of error in which bias occurs in the measurement of an exposure or outcome. Thus, the information obtained and recorded from patients in different study groups is unequal in some way [18]. Many subtypes of information bias can occur, including interviewer bias, chronology bias, recall bias, transfer bias (patient loss to follow-up), bias from misclassification of exposure or outcome, and performance bias.

Interviewer bias

Interviewer bias refers to a systematic difference in how information is solicited, recorded, or interpreted [18,21]. Interviewer bias is more likely when disease status is known to the interviewer. An example of this would be a patient with Buerger's disease enrolled in a case-control study which attempts to retrospectively identify risk factors. If the interviewer is aware that the patient has Buerger's disease, he or she may probe for risk factors, such as smoking, more extensively (“Are you sure you've never smoked? Never? Not even once?”) than in control patients. Interviewer bias can be minimized or eliminated if the interviewer is blinded to the outcome of interest or if the outcome of interest has not yet occurred, as in a prospective trial.

Chronology bias

Chronology bias occurs when historic controls are used as a comparison group for patients undergoing an intervention. Secular trends within the medical system could affect how disease is diagnosed, how treatments are administered, or how preferred outcome measures are obtained [20]. Each of these differences could act as a source of inequality between the historic controls and intervention groups. For example, many microsurgeons currently use preoperative imaging to guide perforator flap dissection. Imaging has been shown to significantly reduce operative time [40]. A retrospective study of flap dissection time might conclude that dissection time decreases as surgeon experience improves. More likely, the use of preoperative imaging caused a notable reduction in dissection time. Thus, chronology bias is present. Chronology bias can be minimized by conducting prospective cohort or randomized control trials, or by using historic controls from only the very recent past.

Recall bias

Recall bias refers to the phenomenon in which the outcomes of treatment (good or bad) may color subjects' recollections of events prior to or during the treatment process. One common example is the perceived association between autism and the MMR vaccine. This vaccine is given to children during a prominent period of language and social development. As a result, parents of children with autism are more likely to recall immunization administration during this developmental regression, and a causal relationship may be perceived [22]. Recall bias is most likely when exposure and disease status are both known at the time of study, and it can also be problematic when patient interviews (or subjective assessments) are used as a primary data source. When patient-report data are used, some investigators recommend that the trial design mask the intent of questions in structured interviews or surveys and/or use only validated scales for data acquisition [23].

Transfer bias

In almost all clinical studies, subjects are lost to follow-up. In these instances, investigators must consider whether these patients are fundamentally different than those retained in the study. Researchers must also consider how to treat patients lost to follow-up in their analysis. Well-designed trials usually have protocols in place to attempt telephone or mail contact for patients who miss clinic appointments. Transfer bias can occur when study cohorts have unequal losses to follow-up. This is particularly relevant in surgical trials when study cohorts are expected to require different follow-up regimens. Consider a study evaluating outcomes in inferior pedicle Wise pattern versus vertical scar breast reductions. Because the Wise pattern patients often have fewer contour problems in the immediate postoperative period, they may be less likely to return for long-term follow-up. By contrast, patient concerns over resolving skin redundancies in the vertical reduction group may make these individuals more likely to return for postoperative evaluations by their surgeons. Some authors suggest that patient loss to follow-up can be minimized by offering convenient office hours, personalized patient contact via phone or email, and physician visits to the patient's home [20,24].
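A quick simulation makes the breast reduction example concrete. In the Python sketch below (my own hypothetical numbers, not data from any study), both cohorts have the same true satisfaction, but in one cohort unhappy patients are more likely to return for follow-up, so the observed score understates how well that group actually did.

```python
# A toy simulation (my own hypothetical numbers, not data from any study)
# of transfer bias: both cohorts have the same true mean satisfaction, but
# in cohort B unhappy patients are more likely to return for follow-up.
import random

random.seed(1)

def observed_mean(biased_followup):
    kept = []
    for _ in range(10_000):
        score = random.gauss(7.0, 1.5)  # true satisfaction, mean 7.0
        if biased_followup:
            p_return = 0.9 if score < 7.0 else 0.4  # unhappy patients return more
        else:
            p_return = 0.6  # follow-up unrelated to outcome
        if random.random() < p_return:
            kept.append(score)
    return sum(kept) / len(kept)

print(f"cohort A (even follow-up):    {observed_mean(False):.2f}")  # ~7.0
print(f"cohort B (unequal follow-up): {observed_mean(True):.2f}")   # looks worse
```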

Bias from misclassification of exposure or outcome

Misclassification of exposure can occur if the exposure itself is poorly defined or if proxies of exposure are utilized. For example, this might occur in a study evaluating efficacy of becaplermin (Regranex, Systagenix Wound Management) versus saline dressings for management of diabetic foot ulcers. Significantly different results might be obtained if the becaplermin cohort of patients included those prescribed the medication, rather than patients directly observed to be applying the medication. Similarly, misclassification of outcome can occur if non-objective measures are used. For example, clinical signs and symptoms are notoriously unreliable indicators of venous thromboembolism. Patients are accurately diagnosed by physical exam less than 50% of the time [25]. Thus, using Homan's sign (calf pain elicited by extreme dorsiflexion) or pleuritic chest pain as study measures for deep venous thrombosis or pulmonary embolus would be inappropriate. Venous thromboembolism is appropriately diagnosed using objective tests with high sensitivity and specificity, such as duplex ultrasound or spiral CT scan [26-28].

Performance bias

In surgical trials, performance bias may complicate efforts to establish a cause-effect relationship between procedures and outcomes. As plastic surgeons, we are all aware that surgery is rarely standardized and that technical variability occurs between surgeons and among a single surgeon's cases. Variations by surgeon commonly occur in surgical plan, flow of operation, and technical maneuvers used to achieve the desired result. The surgeon's experience may have a significant effect on the outcome. To minimize or avoid performance bias, investigators can consider cluster stratification of patients, in which all patients having an operation by one surgeon or at one hospital are placed into the same study group, as opposed to placing individual patients into groups. This will minimize performance variability within groups and decrease performance bias. Cluster stratification of patients may allow surgeons to perform only the surgery with which they are most comfortable or experienced, providing a more valid assessment of the procedures being evaluated. If the operation in question has a steep learning curve, cluster stratification may make generalization of study results to the everyday plastic surgeon difficult.

IV. Bias after a trial

Bias after a trial's conclusion can occur during data analysis or publication. In this section, we will discuss citation bias, evaluate the role of confounding in data analysis, and provide a brief discussion of internal and external validity.

Citation bias

Citation bias refers to the fact that researchers and trial sponsors may be unwilling to publish unfavorable results, believing that such findings may negatively reflect on their personal abilities or on the efficacy of their product. Thus, positive results are more likely to be submitted for publication than negative results. Additionally, existing inequalities in the medical literature may sway clinicians' opinions of the expected trial results before or during a trial. In recognition of citation bias, the International Committee of Medical Journal Editors (ICMJE) released a consensus statement in 2004 [29] which required all randomized controlled trials to be pre-registered with an approved clinical trials registry. In 2007, a second consensus statement [30] required that all prospective trials not deemed purely observational be registered with a central clinical trials registry prior to patient enrollment. ICMJE member journals will not publish studies which are not registered in advance with one of five accepted registries. Despite these measures, citation bias has not been completely eliminated. While centralized documentation provides medical researchers with information about unpublished trials, investigators may be left to only speculate as to the results of these studies.

Confounding

Confounding occurs when an observed association is due to three factors: the exposure, the outcome of interest, and a third factor which is independently associated with both the outcome of interest and the exposure [18]. Examples of confounders include observed associations between coffee drinking and heart attack (confounded by smoking) and the association between income and health status (confounded by access to care). Pre-trial study design is the preferred method to control for confounding. Prior to the study, matching patients for demographics (such as age or gender) and risk factors (such as body mass index or smoking) can create similar cohorts among identified confounders. However, the effect of unmeasured or unknown confounders may only be controlled by true randomization in a study with a large sample size. After a study's conclusion, identified confounders can be controlled by analyzing for an association between exposure and outcome only in cohorts similar for the identified confounding factor. For example, in a study comparing outcomes for various breast reconstruction options, the results might be confounded by the timing of the reconstruction (i.e., immediate versus delayed procedures). In other words, procedure type and timing may both have significant and independent effects on breast reconstruction outcomes. One approach to this confounding would be to compare outcomes by procedure type separately for immediate and delayed reconstruction patients. This maneuver is commonly termed a “stratified” analysis. Stratified analyses are limited if multiple confounders are present or if sample size is small. Multi-variable regression analysis can also be used to control for identified confounders during data analysis. The role of unidentified confounders cannot be controlled using statistical analysis.
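A small worked example may make the stratified analysis concrete. The Python sketch below uses invented complication counts (mine, not the authors'): the crude rates suggest procedure B is worse, yet within each timing stratum the two procedures perform identically, so the apparent difference came entirely from the confounder.

```python
# A worked numeric sketch with invented counts (mine, not the authors'):
# timing confounds the comparison of two procedures. Each cell holds
# (complications, patients) for a procedure within a timing stratum.
data = {
    ("A", "immediate"): (8, 80),  ("A", "delayed"): (6, 20),
    ("B", "immediate"): (2, 20),  ("B", "delayed"): (24, 80),
}

def rate(cells):
    events = sum(e for e, _ in cells)
    total = sum(t for _, t in cells)
    return events / total

# Crude analysis: procedure B looks far worse (26% vs 14%).
for proc in ("A", "B"):
    crude = rate([v for (p, _), v in data.items() if p == proc])
    print(f"procedure {proc}: crude complication rate = {crude:.0%}")

# Stratified analysis: within each timing stratum the procedures are
# identical (10% immediate, 30% delayed); timing drove the crude gap.
for timing in ("immediate", "delayed"):
    for proc in ("A", "B"):
        e, t = data[(proc, timing)]
        print(f"{timing:>9} / {proc}: {e/t:.0%}")
```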

Internal vs. External Validity

Internal validity refers to the reliability or accuracy of the study results. A study's internal validity reflects the author's and reviewer's confidence that study design, implementation, and data analysis have minimized or eliminated bias and that the findings are representative of the true association between exposure and outcome. When evaluating studies, careful review of study methodology for the sources of bias discussed above enables the reader to evaluate internal validity. Studies with high internal validity are often explanatory trials, those designed to test efficacy of a specific intervention under idealized conditions in a highly selected population. However, high internal validity often comes at the expense of generalizability. For example, although supra-microsurgery techniques, defined as anastomosis of vessels less than 0.5 to 0.8 mm in diameter, have been shown to be technically possible in high volume microsurgery centers [31-33] (high internal validity), it is unlikely that the majority of plastic surgeons could perform this operation with an acceptable rate of flap loss.

External validity of research design deals with the degree to which findings can be generalized to other groups or populations. In contrast with explanatory trials, pragmatic trials are designed to assess the benefits of interventions under real clinical conditions. These studies usually include study populations generated using minimal exclusion criteria, making them very similar to the general population. While pragmatic trials have high external validity, loose inclusion criteria may compromise the study's internal validity. When reviewing scientific literature, readers should assess whether the research methods preclude generalization of the study's findings to other patient populations. In making this decision, readers must consider differences between the source population (the population from which the study population originated) and the study population (those included in the study). Additionally, it is important to distinguish limited generalizability due to a selective patient population from true bias [8].

When designing trials, achieving balance between internal and external validity is difficult. An ideal trial design would randomize patients and blind those collecting and analyzing data (high internal validity), while keeping exclusion criteria to a minimum, thus making study and source populations closely related and allowing generalization of results (high external validity) [34]. For those evaluating the literature, objective models exist to quantify both external and internal validity. Conceptual models to assess a study's generalizability have been developed [35]. Additionally, qualitative checklists can be used to assess the external validity of clinical trials. These can be utilized by investigators to improve study design and also by those reading published studies [36].

The importance of internal validity is reflected in the existing concept of “levels of evidence” [5], where more rigorously designed trials produce higher levels of evidence. Such high-level studies can be evaluated using the Jadad scoring system, an established, rigorous means of assessing the methodological quality and internal validity of clinical trials [37]. Even so-called “gold-standard” RCTs can be undermined by poor study design. Like all studies, RCTs must be rigorously evaluated. Descriptions of study methods should include details on the randomization process, method(s) of blinding, treatment of incomplete outcome data, and funding source(s), and should include data on statistically insignificant outcomes [38]. Authors who provide incomplete trial information can create additional bias after a trial ends, because readers are not able to evaluate the trial's internal and external validity [20]. The CONSORT statement [39] provides a concise 22-point checklist for authors reporting the results of RCTs. Manuscripts that conform to the CONSORT checklist will provide adequate information for readers to understand the study's methodology. As a result, readers can make independent judgments on the trial's internal and external validity.


Acknowledgments

Dr. Pannucci receives salary support from the NIH T32 grant program (T32 GM-08616).

Meeting disclosure:

This work has not been previously presented.

None of the authors has a financial interest in any of the products, devices, or drugs mentioned in this manuscript.


Robert Evans Wilson Jr.

Cognitive Bias Is the Loose Screw in Critical Thinking

Recognizing your biases enhances understanding and communication.

Posted May 17, 2021 | Reviewed by Jessica Schrader

  • People cannot think critically unless they are aware of their cognitive biases, which can alter their perception of reality.
  • Cognitive biases are mental shortcuts people take in order to process the mass of information they receive daily.
  • Cognitive biases include confirmation bias, anchoring bias, bandwagon effect, and negativity bias.

When I was a kid, I was enamored of cigarette-smoking movie stars. When I was a teenager, some of my friends began to smoke; I wanted to smoke too, but my parents forbade it. I was also intimidated by the ubiquitous anti-smoking commercials I saw on television warning me that smoking causes cancer. As much as I wanted to smoke, I was afraid of it.

When I started college as a pre-med major, I also started working in a hospital emergency room. I was shocked to see that more than 90% of the nurses working there were smokers, but that was not quite enough to convince me that smoking was OK. It was the doctors: 11 of the 12 emergency room physicians I worked with were smokers. That was all the convincing I needed. If actual medical doctors thought smoking was safe, then so did I. I started smoking without concern because I had fallen prey to an authority bias, which is a type of cognitive bias. Fortunately for my health, I wised up and quit smoking 10 years later.

It's Likely You're Unaware of These Habits

Have you ever thought someone was intelligent simply because they were attractive? Have you ever dismissed a news story because it ran in a media source you didn’t like? Have you ever thought or said, “I knew that was going to happen!” in reference to a team winning, a stock going up in value, or some other unpredictable event occurring? If you replied “yes” to any of these, then you may be guilty of relying on a cognitive bias.

In my last post, I wrote about the importance of critical thinking, and how in today’s information age, no one has an excuse for living in ignorance. Since then, I recalled a huge impediment to critical thinking: cognitive bias. We all lean on these mental crutches, even though we don’t do it intentionally.

What Are Cognitive Biases?

The Cambridge English Dictionary defines cognitive bias as the way a particular person understands events, facts, and other people, which is based on their own particular set of beliefs and experiences and may not be reasonable or accurate.

PhilosophyTerms.com calls it a bad mental habit that gets in the way of logical thinking.

PositivePsychology.com describes it this way: “We are often presented with situations in life when we need to make a decision with imperfect information, and we unknowingly rely on prejudices or biases.”

And, according to Alleydog.com, a cognitive bias is an involuntary pattern of thinking that produces distorted perceptions of people, surroundings, and situations around us.

In brief, a cognitive bias is a shortcut to thinking. And, it’s completely understandable; the onslaught of information that we are exposed to every day necessitates some kind of time-saving method. It is simply impossible to process everything, so we make quick decisions. Most people don’t have the time to thoroughly think through everything they are told. Nevertheless, as understandable as depending on biases may be, it is still a severe deterrent to critical thinking.

Here's What to Watch Out For

Wikipedia lists 197 different cognitive biases. I am going to share with you a few of the more common ones so that in the future, you will be aware of the ones you may be using.

Confirmation bias is when you prefer to attend to media and information sources that are in alignment with your current beliefs. People do this because it helps maintain their confidence and self-esteem when the information they receive supports their knowledge set. Exposing oneself to opposing views and opinions can cause cognitive dissonance and mental stress. On the other hand, exposing yourself to new information and different viewpoints helps open up new neural pathways in your brain, which will enable you to think more creatively (see my post: Surprise: Creativity Is a Skill, Not a Gift!).

Anchoring bias occurs when you become committed or attached to the first thing you learn about a particular subject. A first impression of something or someone is a good example (see my post: Sometimes You Have to Rip the Cover Off). Similar to anchoring is the halo effect, which is when you assume that a person’s positive or negative traits in one area will be the same in some other aspect of their personality. For example, you might think that an attractive person will also be intelligent without seeing any proof to support it.


Hindsight bias is the inclination to see some events as more predictable than they are; it is also known as the “I knew it all along” reaction. Examples of this bias would be believing that you knew who was going to win an election, a football or baseball game, or even a coin toss after it occurred.

Misinformation effect is when your memories of an event can become affected or influenced by information you received after the event occurred. Researchers have proven that memory is inaccurate because it is vulnerable to revision when you receive new information.

Actor-observer bias is when you attribute your actions to external influences and other people's actions to internal ones. You might think you missed a business opportunity because your car broke down, but your colleague failed to get a promotion because of incompetence.

False consensus effect is when you assume more people agree with your opinions and share your values than actually do. This happens because you tend to spend most of your time with others, such as family and friends, who actually do share beliefs similar to yours.

Availability bias occurs when you believe the information you possess is more important than it actually is. This happens when you watch or listen to media news sources that tend to run dramatic stories without sharing any balancing statistics on how rare such events may be. For example, if you see several stories on fiery plane crashes, you might start to fear flying because you assume they occur with greater frequency than they actually do.

Bandwagon effect, also known as herd mentality or groupthink, is the propensity to accept beliefs or values because many other people hold them. This is a conformity bias that occurs because most people desire acceptance, connection, and belonging with others, and fear rejection if they hold opposing beliefs. Most people will not think through an opinion and will assume it is correct because so many others agree with it.

Authority bias is when you accept the opinion of an authority figure because you believe they know more than you. You might assume that they have already thought through an issue and made the right conclusion. And, because they are an authority in their field, you grant more credibility to their viewpoint than you would for anyone else. This is especially true in medicine where experts are frequently seen as infallible. An example would be an advertiser showing a doctor, wearing a lab coat, touting their product.

Negativity bias is when you pay more attention to bad news than good. This is a natural bias that dates back to humanity’s prehistoric days when noticing threats, risks, and other lethal dangers could save your life. In today’s civilized world, this bias is not as necessary (see my post Fear: Lifesaver or Manipulator).

Illusion of control is the belief that you have more control over a situation than you actually do. An example of this is when a gambler believes he or she can influence a game of chance.

Understand More and Communicate Better

Learning these biases, and being on the alert for them when you make a decision to accept a belief or opinion, will help you become more effective at critical thinking.

Source: Cognitive Bias Codex by John Manoogian III/Wikimedia Commons

Robert Wilson is a writer and humorist based in Atlanta, Georgia.


Selection Bias: What it is, Types & Examples

Aim to avoid selection bias and excel at sampling without affecting the validity of your data. Learn more about it.

Researchers may sometimes find that their results don’t match the realities of the target community. There are numerous possible causes, but selection bias is the most important. It occurs when the study sample fails to accurately represent the population of interest, resulting in distortions in the research results.

Understanding selection bias, its practical impacts, and the best ways to avoid it will help you deal with its effects. Everything you need to know about how to enhance your data collection process will be covered in this post.

What is Selection Bias?

Selection bias refers to experimental mistakes that lead to an inaccurate representation of your research sample. It arises when the participant pool or data does not represent the target group.

A significant cause of selection bias is the researcher's failure to consider subgroup characteristics, which creates fundamental disparities between the sample and the research population.

Selection bias arises in research for several reasons. If the researcher chooses the sample population using incorrect criteria, numerous instances of this bias can result. It may also arise from factors affecting study volunteers’ willingness to participate.

All statistical models in the learning sciences require data. Good data is crucial to developing a statistically valid set of models, but it’s surprisingly easy to get insufficient information. Selection bias affects researchers at all process stages, from data collection to analysis.

For instance, researchers may fail to realize that their findings do not apply to other people or different settings. This type of error can arise even when individuals are randomly assigned to one of two or more groups, because only some of the people eligible to enroll actually participate.

This means that people considered suitable candidates for a particular program may or may not choose to participate. Thus, those who do participate in the program may have different characteristics than those who do not. This non-random selection process can lead to incorrect statistical and causal inferences, and it can invalidate the gathered data.

We have published a blog that talks about subgroup analysis; why don’t you check it out for more ideas?

Selection Bias Types

There are many types of selection bias, each affecting the validity of your data in a specific way. Let’s go over some of the most common ones:

  • Sampling Bias:

Sampling bias is a form of selection bias that occurs when some members of the intended population are less likely than others to be included in the sample. This can happen when the researcher draws the sample mostly through convenience sampling, or carefully selects individuals who resemble the intended study subjects instead of choosing them at random from the population. Either way, the resulting sample can skew any statistical analysis and any interpretation of the results.

Read more: Bias in Research by QuestionPro 

  • Self-selection Bias:

This type of selection bias, also known as “volunteer bias,” occurs when people who choose to participate in a study are not representative of the larger population of interest. For example, if you want to study student career preferences, you may only be able to attract students from schools known for attracting wealthy students. Volunteer bias may also occur when a study examines people of a certain race but doesn’t recruit enough participants who identify as members of that race.

Like any other form of bias, self-selection bias distorts the data gathered in research. In most cases, the researcher will end up with highly inaccurate results and a study with little to no validity.

  • Nonresponse bias

Nonresponse bias happens when people don’t answer a survey or participate in a research project. It often happens in survey research when participants lack the appropriate abilities, lack time, or feel guilt or shame about the topic.

For example, suppose researchers are interested in how computer scientists view a new piece of software. They conduct a survey and find that many computer scientists didn’t respond or didn’t finish it.

After reviewing the data, the researchers concluded that the respondents believed the software was excellent and high-quality. However, after releasing the new software to the full population of computer scientists, they received mainly unfavorable criticism.

It turned out that the survey respondents were mostly entry-level computer scientists who couldn’t spot the program’s flaws. The respondents did not reflect the broader population of computer scientists, so the results were inaccurate.
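A short simulation shows how this plays out numerically. In the Python sketch below (all numbers and the response model are invented for illustration), users who like a product are more likely to answer the survey, so the observed average rating lands well above the true population average.

```python
# A toy simulation (all numbers and the response model are invented for
# illustration) of nonresponse bias: satisfied users answer more often,
# so the observed average overstates the true population average.
import random

random.seed(7)
ratings = [random.randint(1, 5) for _ in range(50_000)]  # everyone's true opinion

# Hypothetical response model: probability of answering rises with the rating.
respondents = [r for r in ratings if random.random() < 0.10 + 0.15 * r]

print(f"true mean rating:     {sum(ratings) / len(ratings):.2f}")          # ~3.0
print(f"observed mean rating: {sum(respondents) / len(respondents):.2f}")  # ~3.5
```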

  • Exclusion Bias:

Inclusion bias, the flip side of exclusion bias, happens when the researcher intentionally includes certain subgroups in the sample population. It is closely related to nonresponse bias and affects the internal validity of your systematic investigation.

Experts define inclusion bias as “the collective term covering the various potential biases that can result from the post-randomization inclusion of patients in a trial and subsequent analyses.” When this happens, your research outcomes may establish a false connection between variables.

Exclusion bias occurs when you intentionally exclude some subgroups from the sample population before randomizing them into groups. You may have excluded patients with certain conditions, such as cancer or HIV/AIDS, because it would have been unethical to study those people without their consent. Or, maybe you excluded them because you didn’t want to give them access to another treatment option during their clinical trial. Some researchers also choose not to include people who are too ill or too old for participation in clinical trials (because these people might not be able to participate effectively or might not receive enough benefit from participating).

  • Recall Bias:

One of the most common forms of recall bias is retroactive memory distortion. Retroactive memory distortion occurs when people remember events and experiences in a way that suits their current needs rather than as those events originally happened. For example, someone might recall an event as having been a positive or even enjoyable experience when it was actually negative. In addition, retroactive memory distortion can occur when people have difficulty remembering details that are important to the research topic, such as facts about their own lives or the lives of others.

Retroactive memory distortion can also occur when people include inaccurate information in their recall reports. This happens when they report something that never happened or something that happened at a different time than when it actually occurred. 

For example, a person might report that he spent five hours traveling from work to home on a particular day when, in reality, it only took him three hours because he had lunch at his desk beforehand and forgot about it until later in the day.

  • Survivorship bias

Survivorship bias occurs when a researcher screens variables through some selection process and studies only those that successfully passed through it. This preliminary selection removes the failed cases from view simply because of their lack of visibility.

Survivorship bias thus focuses attention on the most successful cases, even when they are not representative. It can distort your research outcomes and lead to unjustifiably positive conclusions that don’t reflect reality.

Suppose you’re researching the variables behind entrepreneurial success. Many famous entrepreneurs didn’t finish college, which could lead you to assume that leaving college with a strong concept is enough to launch a career. But the majority of college dropouts don’t end up rich.

In actuality, many more people dropped out of college and launched unsuccessful businesses. In this example, survivorship bias occurs when you only pay attention to the dropouts who succeeded and ignore the vast majority of dropouts who failed.
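The dropout example can be put in numbers. In the Python sketch below (a made-up illustration; the 30,000 founders and the 1% success rate are invented), there are roughly 300 successful dropouts to write profiles about, which feels like a lot, even though 99% of dropouts failed.

```python
# A made-up illustration (not from the article) of survivorship bias: there
# are plenty of successful dropouts to profile, yet almost all dropouts fail.
import random

random.seed(3)
dropouts = [random.random() < 0.01 for _ in range(30_000)]  # 1% succeed

print(f"visible success stories: {sum(dropouts)}")                      # ~300
print(f"dropout success rate:    {sum(dropouts) / len(dropouts):.1%}")  # ~1.0%
```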

  • Attrition bias

Attrition bias occurs when some survey respondents drop out while the survey is still being conducted. As a result, there are many unknowns in your research findings, which lowers the quality of the conclusions.

Most of the time, the researcher looks for trends among the drop-out cases. If you can identify these tendencies, you might be able to determine why respondents left your survey and take appropriate action.

  • Undercoverage bias

Undercoverage bias arises when the sample is drawn from only a portion of the target population, so that some groups are underrepresented. Online surveys are especially vulnerable to undercoverage bias.

Say you run an online survey on self-reported health, focusing on excessive drinking and smoking behaviors. Because of the way you conduct the survey, you systematically exclude people who don’t use the internet.

This way, older and less educated individuals are left out of your sample. Since internet users and non-users can differ significantly, you can’t draw reliable conclusions from your online survey.

How to Avoid Selection Bias

Estimating the strength of a relationship between an outcome (the dependent variable) and several predictor variables is essential to many research questions. Bivariate analysis and multiple regression methods are commonly used to detect and adjust for selection bias.

Bivariate analysis is a quantitative analysis often used to determine the empirical relationship between two variables. In this method, researchers measure each predictor variable individually and then apply statistical tests to determine whether it affects the outcome variable.

If there is no relationship between the predictor variables and the outcome, then researchers will not find evidence of selection bias in their data collection process. However, if there is some sort of relationship between these variables, it may be that some level of selection bias was present when the data were collected.

Multiple regression methods allow researchers to assess the strength of the relationship between an outcome (the dependent variable) and several predictor variables simultaneously.
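As a concrete sketch of the regression approach, the Python example below (my own, on invented data; the variable names and coefficients are hypothetical) fits an ordinary least squares model of an outcome on two predictors using numpy. In practice, one would examine whether predictors tied to recruitment, such as age in an online panel, show unexpected relationships with the outcome.

```python
# A minimal sketch (my own, on invented data) of the regression check
# described above: estimate how strongly each predictor relates to the
# outcome using ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
income = rng.normal(50, 10, n)  # invented predictor
age = rng.uniform(18, 80, n)    # invented predictor
# Invented outcome with known coefficients, plus noise.
satisfaction = 2.0 + 0.05 * income + 0.03 * age + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), income, age])  # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
print(dict(zip(["intercept", "income", "age"], coef.round(3))))
# A strong, unexpected relationship between a recruitment-related predictor
# and the outcome can hint that the sample was not selected neutrally.
```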

There’s a good chance selection bias has affected your survey results at some point. Review the following advice to help you avoid it:

During survey design

Try some of these suggestions to avoid selection bias when you are developing the structure for your survey:

  • Make sure that your survey objectives are apparent.
  • Specify the standards that should be met for your intended audience .
  • Allow every possible participant a fair opportunity to take part in the survey.

During sampling

Consider putting some of these strategies into practice during the process of selecting samples:

  • When employing random sampling in your processes, ensure proper randomization.
  • Be sure that your list of participants is up to date and accurately represents the intended audience.
  • Make sure that the subgroups represent the population as a whole and share the essential factors.

During evaluation

When going through the evaluation and validation process, you need to think about putting some of these ideas into action to avoid selection bias:

  • If you want to ensure that your sample selection, procedure, and data collection are free of bias, it is a good idea to have a second researcher double-check your work.
  • Apply technology to monitor how the data changes so you may identify unexpected outcomes and investigate quickly to repair or avoid inaccurate data.
  • Check previous fundamental research data trends to verify if your research is on track for strong internal validity.
  • Invite the people who didn’t answer the first survey to take a follow-up survey. A second round might yield more responses and a clearer understanding of the findings.

Learn how to avoid selection bias with this quick Audience by QuestionPro video!

What are the impacts of selection bias?

There is always the possibility of random or systematic errors in research that compromise the reliability of research outcomes. Selection bias can have various impacts, and it’s often hard to tell how significant or in which direction those effects are. The impacts can lead to several issues for businesses, including the following:

  • Risk of losing revenue and reputation

For business planning and strategy, insights obtained from non-representative samples are significantly less helpful because they don’t align with the target population. There is a risk of losing money and reputation if business decisions are based on these findings.

  • Impacts the external validity of the analysis

Research becomes less trustworthy as a result of inaccurate data. The analysis’s external validity is therefore compromised by the biased sample.

  • Leads to inappropriate business decisions

If the final results are biased and unrepresentative of the topic, it is unsafe to rely on the study’s findings when making important business decisions.

Understanding selection bias, its types, and how it affects research outcomes is the first step in dealing with it. We’ve covered crucial information that will help you identify it and reduce its impact to a minimum. You can avoid selection bias by using QuestionPro to gather reliable research data.

Various situations can result in selection bias, such as when non-neutral samples are combined with systematic errors. The QuestionPro research suite is an enterprise-grade tool for conducting research and improving experiences.

QuestionPro Audience can help you collect valuable data from your ideal sample.

When conducting research, it’s essential to understand the nature of selection bias. This is the tendency for your research results to be influenced by the characteristics of your participants or sample.

If you’re conducting a study on the effects of sugar on diabetes, for example, and your participants with diabetes are all members of your church, that could be a source of selection bias: people active in your church are more likely to hear about and join the study, and are therefore more likely to end up in the sample.

If you want to avoid this kind of bias in your study, you should collect data from a wide variety of reliable sources with QuestionPro Audience.



Critical thinking

We’ve already established that information can be biased. Now it’s time to look at our own bias.

Studies have shown that we are more likely to accept information when it fits into our existing worldview, a phenomenon known as confirmation or myside bias (for examples see Kappes et al., 2020; McCrudden & Barnes, 2016; Pilditch & Custers, 2018). Wittebols (2019) defines it as a “tendency to be psychologically invested in the familiar and what we believe and less receptive to information that contradicts what we believe” (p. 211). Quite simply, we may reject information that doesn’t support our existing thinking.

This can manifest in a number of ways, with Hahn and Harris (2014) suggesting four main behaviours:

  • Searching only for information that supports our held beliefs
  • Failing to critically evaluate information that supports our held beliefs - accepting it at face value - while explaining away or being overly critical of information that might contradict them
  • Becoming set in our thinking, once an opinion has been formed, and deliberately ignoring any new information on the topic
  • A tendency to be overconfident with the validity of our held beliefs.

Peters (2020) also suggests that we’re more likely to remember information that supports our way of thinking, further cementing our bias. Taken together, the research suggests that bias has a huge impact on the way we think. To learn more about how and why bias can impact our everyday thinking, watch this short video.

Filter bubbles and echo chambers

The theory of filter bubbles emerged in 2011, proposed by the Internet activist Eli Pariser. He defined a filter bubble as “your own personal unique world of information that you live in online” (Pariser, 2011, 4:21). At the time that Pariser proposed the filter bubble theory, he focused on the impact of algorithms, connected with social media platforms and search engines, which prioritised content and personalised results based on the individual's past online activity, suggesting “the Internet is showing us what it thinks we want to see, but not necessarily what we should see” (Pariser, 2011, 3:47; watch his TED talk if you’d like to know more).

Our understanding of filter bubbles has now expanded to recognise that individuals also select and create their own filter bubbles. This happens when you seek out like-minded individuals or sources, or follow friends or people you admire on social media: people with whom you’re likely to share common beliefs, points of view, and interests. Barack Obama (2017) addressed the concept of filter bubbles in his presidential farewell address:

For too many of us it’s become safer to retreat into our own bubbles, whether in our neighbourhoods, or on college campuses, or places of worship, or especially our social media feeds, surrounded by people who look like us and share the same political outlook and never challenge our assumptions… Increasingly we become so secure in our bubbles that we start accepting only information, whether it’s true or not, that fits our opinions, instead of basing our opinions on the evidence that is out there. ( Obama, 2017, 22:57 ).

Filter bubbles are not unique to the social media age. Previously, the term echo chamber was used to describe the same phenomenon in the news media, where different channels exist catering to different points of view. Within an echo chamber, people are able to seek out information that supports their existing beliefs, without encountering information that might challenge, contradict, or oppose them.

Other forms of bias

There are many different ways in which bias can affect the way you think and how you process new information. To discover some additional forms of bias, check out Buzzfeed’s 2017 article on cognitive bias.

What is Bias?


Biases also play a role in how you approach all information, and cognitive scientists have catalogued at least a dozen distinct types of cognitive bias.

There are two forms of bias of particular importance in today’s information-laden landscape: implicit bias and confirmation bias.

Implicit Bias & Confirmation Bias

Implicit / Unconscious Bias 

"Original definition (neutral) - Any personal preference, attitude, or expectation that unconsciously affects a person's outlook or behaviour.

Current definition (negative) - Unconscious favouritism towards or prejudice against people of a particular race, gender, or group that influences one's actions or perceptions; an instance of this."

"unconscious bias, n." OED Online, Oxford University Press, December 2020, www.oed.com/view/Entry/88686003 .

"Thoughts and feelings are “implicit” if we are unaware of them or mistaken about their nature. We have a bias when, rather than being neutral, we have a preference for (or aversion to) a person or group of people. Thus, we use the term “implicit bias” to describe when we have attitudes towards people or associate stereotypes with them without our conscious knowledge." 

https://perception.org/research/implicit-bias/

Confirmation Bias – "Originating in the field of psychology; the tendency to seek or favour new information which supports one’s existing theories or beliefs, while avoiding or rejecting that which disrupts them." 

This definition was added to the Oxford English Dictionary in 2019.

"confirmation, n." OED Online, Oxford University Press, December 2020, www.oed.com/view/Entry/38852. 

Simply put, confirmation bias is the tendency to seek out and/or interpret new information as confirmation of one’s existing beliefs or theories, and to exclude contradictory or opposing information or points of view.

Put Bias in Check!


Now that you are aware of bias, both your personal biases and the bias found in sources of information, you can put it in check. Approach information objectively and neutrally, and evaluate it critically. Numerous tools included in this course can help you do this, like the critical thinking cheat sheet in the previous module.



Selection bias refers to a type of bias that occurs when the participants or subjects chosen for a study are not representative of the target population of interest, leading to flawed or skewed results.

Explanation

Selection bias can arise during the process of selecting individuals for a study sample, where the criteria used for inclusion or exclusion may inadvertently introduce biases. This bias can distort the findings and conclusions of a study, making them less applicable or generalizable to the larger population.

Types of Selection Bias

There are various types of selection bias that can occur:

1. Sampling Bias

Sampling bias occurs when the selection of study participants is not random, leading to an unrepresentative sample. This can happen if the selection process favors certain characteristics or groups of individuals, excluding others.

2. Volunteer Bias

Volunteer bias refers to the bias introduced when participants self-select to be a part of a study. This can result in a sample that is more motivated, cooperative, or different in some relevant way from the general population, affecting the overall validity of the results.

3. Healthy User Bias

Healthy user bias arises when participants in a study are healthier or have healthier habits than the general population. This bias can occur if the study requires individuals to be healthy at the outset, leading to findings that may not be applicable to the broader population.

4. Berkson’s Bias

Berkson’s bias occurs when the selection of study participants is based on hospital admissions or clinic visits, which may create an artificial association between diseases or conditions that does not really exist in the general population.

Consequences of Selection Bias

The presence of selection bias can have several consequences:

1. Inaccurate Findings

Selection bias can lead to findings that are not representative of the target population, resulting in inaccurate estimates of effects or associations between variables.

2. Lack of Generalizability

When the sample chosen for a study does not reflect the larger population, the findings cannot be easily extrapolated or generalized to the broader context. This limits the applicability and external validity of the study.

3. Invalid Conclusions

Selection bias can undermine the validity of conclusions drawn from a study. Flawed results may lead to incorrect interpretations and recommendations.

4. Wasted Resources

Conducting research with biased samples wastes valuable resources such as time, effort, and funding. Biased findings may not contribute meaningfully to scientific knowledge or inform decision-making.

Prevention and Mitigation

To minimize selection bias, researchers should take appropriate measures:

1. Random Sampling

Implement random sampling techniques to ensure representative selection of participants, reducing the potential for biased samples.

2. Clear Inclusion Criteria

Define clear and objective inclusion and exclusion criteria to minimize subjective judgment in participant selection.

3. Enhance Participation Rates

Efforts should be made to enhance participation rates to minimize volunteer bias. Strategies like incentivizing participation or reaching out to a diverse range of potential participants can be helpful.

4. Transparent Reporting

Clearly report the details of participant selection, criteria, and any potential limitations or biases in the study to allow readers to assess the generalizability and quality of the findings.

5. Utilize Multiple Data Sources

When possible, researchers should gather data from various sources and compare findings to mitigate the impact of selection bias and increase the robustness of the results.

Research Fundamentals October 20, 2020

What is Selection Bias? (And How to Defeat it)


Bryn Farnsworth

Good research begins well before the first experiment starts.

During World War II, a statistician by the name of Abraham Wald was given a rather unexpected job given his background: improving the survival rate of US aircraft. Wald was a smart man and looked over the prior analyses that had been done. The previous investigators had seen the damage and destruction dealt to the aircraft, and advised adding more armor to the most damaged areas to increase their protection. Specific parts were shot and torn up, so new armor was added there.

Yet the survivability rate didn’t increase. In fact, it decreased, as the new armor added weight and reduced the agility of the planes, and they still arrived back with damage in the same areas. Wald observed all of this and advised that the air force start adding armor only to the untouched areas – the parts without a trace of damage. He reasoned that the only data about survivability came from the surviving planes themselves; the ones that came back with damage showed exactly where the non-lethal blows could be dealt.

With the advice taken on board, survivability increased and the rest is, well, history. While this makes for a great example of lateral thinking, it also tells us something critical about data collection – selection bias.

Selection bias is an experimental error that occurs when the participant pool, or the subsequent data, is not representative of the target population.

There are several types of selection bias, and most can be prevented before the results are delivered. Although there might not always be an entire air force on the line when it comes to getting it right, avoiding this bias is still essential for good research.

Let’s go through some examples, and explore what can be done to stop this bias occurring before the first data point is even collected.

Sampling Bias

There are several aspects of sampling bias, all of which ultimately mean that the population being studied does not provide the data we require to draw conclusions.

A common example of this happening in practice is through self-selection. Specific groups of people may be drawn to taking part in a particular study because of self-selecting characteristics. It is known that individuals inclined to sensation-seeking or thrill-seeking are more likely to take part in certain studies, which could skew the data of a study examining those personality traits (and possibly of other studies too).


The best way around this bias is to draw from a sample that is not self-selecting. This may not always be possible of course, due to experimental constraints (particularly for studies requiring volunteers), but particular effort should be made to avoid the potential for this bias when examining different personality types. The effects of this bias are unlikely to be so detrimental if the experiment is concerned with something more constant, such as psychophysiological measurements.

Pre-screening

Another pitfall that experimenters can fall into is pre-screening participants. There can be good reasons to do so (for example, to ensure correct control groups), but it can also distort the sample population. As a consequence, this could result in selecting participants who share a common characteristic that will affect the results.

This is similar to self-selection in outcome, but it is led by the researcher (and usually with good intentions). To avoid it, a double-blind design may be necessary where participant screening has to be performed, meaning that the choices are made by an individual who is independent of the research goals (which also avoids experimenter bias).

Participant attrition

The sample can also be affected by the experimental setup while it’s in action. If participants drop out of the study in a biased way – if there is a non-random reason why this is occurring – then the remaining participants are unlikely to be representative of the original sample pool (never mind the population at large).

This dropout rate is known as participant attrition, and it is most commonly seen in investigations where there is an ongoing intervention with several measurements. For example, a medical trial may see numerous participants exit the study if the medicine doesn’t appear to be working (or is making them ill). In this way, only the remaining (or surviving, as in Wald’s case above) participants will be investigated at the end of the experiment.

It’s therefore important that participants who drop out of the study are followed up after doing so, in order to determine whether their attrition is due to a common factor with other participants, or for reasons external to the experiment.
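As a quick first check, the baseline records of those who left can be compared with those who stayed. Below is a minimal sketch of such a check in Python; the dataset and the cutoff are entirely hypothetical, and a real analysis would compare several baseline variables, not just one.

```python
# A minimal attrition check on hypothetical data: compare a baseline
# measure between participants who dropped out and those who completed.
# A clear difference suggests the attrition is non-random.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical baseline severity scores (higher = more severe)
completers = rng.normal(loc=50, scale=10, size=120)
dropouts = rng.normal(loc=58, scale=10, size=30)  # sicker participants left

t_stat, p_value = stats.ttest_ind(dropouts, completers, equal_var=False)
print(f"dropouts: {dropouts.mean():.1f}, completers: {completers.mean():.1f}, "
      f"p = {p_value:.4f}")
if p_value < 0.05:
    print("Baseline scores differ: attrition looks non-random.")
```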

Undercover / classified

It should come as no surprise that having too few participants will limit the strength of the conclusions that can be made (if they can be made at all), yet many studies do suffer from undercoverage of sample groups.

It is therefore critical that enough participants are available and selected beforehand. The required sample size can be calculated in advance with a power analysis, allowing you to plan the study accordingly. If too many participants drop out due to attrition, the study may need to be repeated.
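Such an a priori calculation takes only a few lines; the sketch below uses the statsmodels power module, and the effect size, significance level, and power shown are illustrative choices rather than recommendations.

```python
# A sketch of an a priori sample-size calculation for a two-sample
# t-test. Effect size (Cohen's d), alpha, and power are illustrative.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,        # medium effect (Cohen's d)
    alpha=0.05,             # significance level
    power=0.8,              # desired statistical power
    alternative="two-sided",
)
print(f"Participants needed per group: {n_per_group:.0f}")
# Recruit extra participants on top of this to absorb expected attrition.
```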

A further point to note is that even if you have enough participants, you need to make sure that they’re classified correctly and put into the right experimental group. Carrying out a study on bilinguals and monolinguals would of course be hampered if it came to light that some participants spoke more (or fewer) languages than their grouping would suggest.

This is particularly pertinent in studies examining different mental disorders, in which the grouping definition could be unclear. For example, studies of anxiety may need to differentiate between participants who have been diagnosed with generalized anxiety disorder and those who suffer from panic attacks, as well as whether participants exhibit subclinical/prodromal symptoms.

Ensuring that the sample is well-defined and well-characterized before beginning the study will therefore keep the findings relevant to the group of interest.

Picking cherries, dredging data

While most selection biases occur before the data have been collected, there are several post-hoc steps which are open to erroneous distortion. These steps relate instead to how the data, rather than the sample, are selected.


Cherry-picking is undoubtedly a good way to prepare a pie, but it is also the phrase given to the act of selecting only the data that conform with what the experimenter is expecting, or hoping, to see.

This can occur due to malpractice , or perhaps wishful thinking on behalf of the investigator. Ultimately though, this leads to bad science either way. The investigator must remain open-minded to the contents of the data, and question how they are interpreting things. It may also help if several people (ideally independent) check the data of the study.

Similar to the above, data-dredging (also known as fishing for data, or p-hacking) is the practice of only considering the data that are significant after the experiment, and inventing post-hoc conclusions for why that emerged. This usually arises when a large number of variables are investigated, and spurious results can appear significant.

By taking only significant variables from a dataset, this is essentially the same as running the same experiment multiple times, and publishing the one occurrence in which significant differences were found.

Experimental reproducibility is a particularly important tenet of science that should be maintained when there is a possibility of data-dredging. With enough replications, the research will be shown to be true or false.
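The scale of the problem is easy to demonstrate with a short simulation; the sketch below tests pure noise against an outcome and counts how many "findings" clear p < 0.05 by chance, along with what survives a simple Bonferroni correction.

```python
# Simulating data-dredging: 50 noise variables tested against an
# outcome. Roughly 5% come out "significant" at p < 0.05 by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_vars = 100, 50

outcome = rng.normal(size=n_subjects)
noise = rng.normal(size=(n_subjects, n_vars))

p_values = np.array(
    [stats.pearsonr(noise[:, i], outcome)[1] for i in range(n_vars)]
)
print(f"Spuriously significant at 0.05: {(p_values < 0.05).sum()} of {n_vars}")
# A Bonferroni correction guards against this inflation:
print(f"Significant after Bonferroni: {(p_values < 0.05 / n_vars).sum()}")
```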

Trick splits

Finally, in a similar way to misclassifying the participants before the experiment, their data can be misclassified after the fact. Incorrect partitioning of data is a way of dividing, or not using, certain parts of the data based on false assumptions.

This veers quite strongly into fraudulent data manipulation, but it can also occur through technical errors rather than intentional malpractice.

In addition to the steps above, there are a few ways in which using iMotions for data collection implicitly guards against some trappings of selection bias, particularly after data collection has taken place.

Using multiple data sources, as with multiple biosensors, can provide another way in which to check your data, by observing if the recordings are in agreement with each other. For example, using both GSR and ECG can help you confirm the levels of physiological arousal, while facial expression analysis can complement survey testing (if someone appears unhappy while claiming the opposite in the survey, then this could be reason for caution with their data). These measures can ultimately give you more confidence in the data that is collected.
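A generic way to operationalize this cross-checking, independent of any particular software, is to quantify the agreement between two recordings of the same construct. The sketch below uses synthetic arrays standing in for two arousal measures; it is not the iMotions API.

```python
# A generic cross-check of two measures of the same construct
# (synthetic data, not any vendor's API): low agreement flags a
# recording for closer inspection before analysis.
import numpy as np

rng = np.random.default_rng(1)
gsr = rng.normal(size=300)                   # stand-in for a GSR trace
cardiac = 0.7 * gsr + rng.normal(size=300)   # partly agreeing signal

r = np.corrcoef(gsr, cardiac)[0, 1]
print(f"Agreement between measures: r = {r:.2f}")
if r < 0.3:
    print("Low agreement: inspect this recording before analysis.")
```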

Furthermore, being able to view the data recorded in real time in a graphical and intuitive format decreases the chance of being misled by the numbers alone. A spreadsheet of endless numbers offers plenty of opportunities for confusion, whereas having the real data displayed in an easily understood format provides clarity for investigation.

How to fix everything by not keeping secrets

The use of iMotions largely helps protect against the data selection bias, yet the selection of participants is something that primarily relies on good experimental design .

While the attempts to fix the emergence of sampling biases may not always be completely feasible, there is one central thing that can be done to stem the bias – be clear with the results. When stating findings, it’s important to be transparent about whom the results apply to.

In our article about participant bias we talked about how the internal validity of the experiment could be problematic, as the results would appear to be correct, yet would actually be biased. For selection bias however, we find that external validity is a more likely culprit – the results appear to be applicable to the population at large, yet are actually biased and invalid for such generalizations.

For experimental integrity, it’s therefore important that the participant information, the data analysis, and the resulting conclusions are made as open and clear as can be.

If you want to know more about biases in research, or would like to learn about how iMotions can help your research, then feel free to contact us .

I hope you’ve enjoyed reading about how to avoid selection bias in research.


Bias and Critical Thinking

Note: The German version of this entry can be found here: Bias and Critical Thinking (German)

Note: This entry revolves more generally around Bias in science. For more thoughts on Bias and its relation to statistics, please refer to the entry on Bias in statistics .

In short: This entry discusses why science is never objective, and what we can really know.

  • 1 What is bias?
  • 2 Design criteria
  • 3 Bias in gathering data, analysing data and interpreting data
  • 4 Bias and philosophy
  • 5 Critical Theory and Bias
  • 6 Further Information

What is bias?

"The very concept of objective truth is fading out of the world." - George Orwell

A bias is “the action of supporting or opposing a particular person or thing in an unfair way, because of allowing personal opinions to influence your judgment” (Cambridge Dictionary). In other words, bias clouds our judgment, and often our actions, in the sense that we act wrongly. We are all biased, because we are individuals with individual experiences, disconnected from other individuals and/or groups, or at least thinking we are.

Recognising bias in research is highly relevant, because bias exposes the myth of scientific objectivity and enables a better recognition of, and reflection on, our flaws and errors. Understanding bias in science is also relevant beyond the empirical, since bias can highlight flaws in our perceptions and actions as humans. To this end, acknowledging bias means understanding the limitations of oneself. Prominent examples are gender bias and racial bias, which are often rooted in our societies and can be deeply buried in our subconscious. As critical researchers it is our responsibility to learn about the diverse biases we have, yet it is beyond this text to explore the subjective human bias we need to overcome. On the ethics of bias, just this much: many would argue that overcoming our biases requires the ability to learn about and question our privileges.

Within research we need to recognise that science has been severely and continuously biased against ethnic minorities, women, and many other groups. Institutional and systemic bias are part of the current reality of the system, and we need to do our utmost to change this: there is a need for debiasing science, and our own actions. While it should not go unnoticed that institutions and systems have already changed, injustices and inequalities still exist. Most research is conducted in the global north, posing a neo-colonial problem that we are far from solving. Much of academia is still far from a diverse understanding of people, and systemic and institutional discrimination remain part of our daily reality. We are on a very long journey, and there is much to be done concerning bias in our institutions.

All this being said, let us shift our attention now to bias in empirical research. Here, we take three different perspectives in order to enable a more reflexive understanding of bias. The first is understanding how different forms of bias relate to the design criteria of scientific methods. The second is the question of which stage in the application of methods - data gathering, data analysis, and interpretation of results - is affected by which bias, and how. Finally, the third approach is to look at the three principal theories of Western philosophy - reason, social contract and utilitarianism - and try to dismantle which of the three can be related to which bias. Many methods are influenced by bias, and recognising which bias affects which design criteria, research stage and principal philosophical theory in the application of a method can help to make empirical research more reflexive.


Design criteria

While qualitative research is often considered prone to many biases, it is also often more reflexive in recognising its limitations. Many qualitative methods are defined by a strong subjective component - that of the researcher - and clear documentation can thus help to make an existing bias more transparent. Many quantitative approaches have a reflexive canon that focuses on specific biases relevant for a specific approach, such as sampling bias or reporting bias. These are often less considered than in qualitative methods, since quantitative methods are still - falsely - considered to be more objective. This is not true. While one could argue that the goal of reproducibility may lead to a better taming of bias, this is not necessarily so, as the replication crisis in psychology clearly shows. Both quantitative and qualitative methods are potentially strongly affected by several cognitive biases, as well as by bias in academia in general, which includes for instance funding bias or a preference for open-access articles. While all this is not surprising, it is all the harder to solve.

Another general differentiation can be made between inductive and deductive approaches. Many deductive approaches are affected by bias associated with sampling, while inductive approaches are more prone to bias during interpretation. Deductive approaches often build on designed experiments, while the strength of inductive approaches is being less bound by methodological designs, which can also make bias more hidden and thus harder to detect. This is why qualitative approaches often place an emphasis on concise documentation.

The connection between spatial scales and bias is rather straightforward, since an individual focus is related to cognitive bias, while system scales are more associated with prejudices, bias in academia and statistical bias. While the impact of temporal bias is less explored, forecast bias is a prominent example when it comes to future predictions; another error is applying our cultural views and values to past humans, which has yet to be clearly named as a bias. What can clearly be said about both spatial and temporal scales is that we are often irrationally biased towards very distant entities - in space or time - even more irrationally than we should be. We are for instance inclined to reject the importance of a distant future scenario, although it may have much the same odds of becoming a reality as a nearer future. For example, almost everybody would rather win the lottery tomorrow than win the lottery in 20 years, irrespective of their chances to live and see it happen, or the longer time they might spend with their lottery prize. Humans are most peculiarly constructed beings, and we are notorious for acting irrationally. This is equally true for spatial distance. We may care irrationally more for people who are close to us compared to people who are very distant, even independent of shared experience (e.g. with friends) or shared history (e.g. with family). Again, this implies a bias which we can be aware of, but which has to be named. No doubt current social developments will increase our capacities to recognise our biases even more, as all these phenomena also affect scientists.

The following table categorizes different types of Bias as indicated in the Wikipedia entry on Bias according to two levels of the Design Criteria of Methods .

Bias in gathering data, analysing data and interpreting data

The three steps of the application of a method are clearly worth investigating, as they allow us to dismantle at which stage we may introduce a bias into our application of a method. Gathering data is strongly associated with cognitive bias, but also with statistical bias and partly even with some bias in academia. Bias associated with sampling can be linked to a subjective perspective as well as to systematic errors rooted in previous results.

This can also affect the analysis of data, yet here one has to highlight that quantitative methods are less affected by bias during analysis than qualitative methods. This is not a normative judgement, and it can clearly be countered by sound documentation of the analytical steps. We should nevertheless not forget that there are different assumptions about the steps of analysis even in such an established field as statistics. Here, different schools of thought constantly clash regarding the optimal approach to analysis, sometimes even with different results. This exemplifies that methodological analysis can be quite normative, underlining the need for a critical perspective. The same holds for qualitative methods, yet there it strongly depends on the specific method, as these methods are more diverse.

Concerning the interpretation of scientific results, the amount and diversity of biases is clearly the highest - or, in other words, the worst. While this is related to the cognitive biases we have as individuals, it is also related to prejudices, bias in academia and statistical bias. Overall, we need to recognise that some methods are less associated with certain biases because they are more established concerning the norms of their application, while other methods are new and less tested by the academic community. When it comes to bias, there is at least a weak effect that safety - although not diversity - concerning methods comes in numbers. More, and more diverse, methods may offer new insights on biases, since one method may reveal a bias that another method cannot. Methodological plurality may reduce bias. For a fully established method, the understanding of its bias is often greater, because the number of times it has been applied is larger. This is especially but not always true for the analysis step, and in part also for methodological designs concerned with sampling. Clear documentation is, however, key to making bias more visible across the three stages.

Bias and philosophy

The last and by far most complex point is the root theories associated with bias. Reason, social contract and utilitarianism are the three key theories of Western philosophy relevant for empiricism, and all biases can be associated with at least one of these three foundational theories. Many cognitive biases are linked to reason, or to unreasonable behaviour. Much of the bias relating to prejudices and society can be linked to the wide field of social contract. Lastly, some bias is clearly associated with utilitarianism. Surprisingly, utilitarianism is associated with a low amount of bias, yet it should be noted that the problem of causality within economic analysis is still up for debate. Much of economic management is rooted in correlative understandings, which are often mistaken for clear-cut causal relations. Psychology also clearly illustrates that investigating a bias is different from unconsciously introducing a bias into your research. Consciousness of bias is the basis for its recognition: if you are not aware of a bias, you cannot take it into account in your knowledge production. While it thus seems not directly helpful to associate empirical research and its biases with the three general foundational theories of philosophy - reason, social contract and utilitarianism - we should still take this into account, not least because it leads us to one of the most important developments of the 20th century: Critical Theory.

Critical Theory and Bias

Out of the growing empiricism of the Enlightenment there grew a concern which we came to call Critical Theory. At the heart of Critical Theory is the focus on critiquing and changing society as a whole, in contrast to only observing or explaining it. Originating in Marx, Critical Theory consists of a clear distancing from previous theories in philosophy - or associated with the social - that try to understand or explain. By embedding society in its historical context (Horkheimer) and by focussing on a continuous and interchanging critique (Benjamin), Critical Theory is a first and bold step towards a more holistic perspective in science. Remembering the Greeks and also some Eastern thinkers, one could say it is the first step back to holistic thinking. From a methodological perspective, Critical Theory is radical because it seeks to distinguish itself not only from previously existing philosophy, but more importantly from the widely dominating empiricism and its societal as well as scientific consequences. A Critical Theory should thus be explanatory, practical and normative, and, what makes it more challenging, it needs to be all three of these things combined (Horkheimer). Through Habermas, Critical Theory became embedded in democracy, yet with a critical view of what we could understand as globalisation and its complex realities. The reflexive empowerment of the individual is as clear a goal as one would expect, also because of the normative link to the political.

Critical Theory is thus a vital step towards a wider integration of diverse philosophies, but it is also essential from a methodological standpoint, since it allowed for the emergence of a true and holistic critique of everything empirical. While this may be viewed as an attack, it can also be interpreted as a necessary step, since the arrogance and the claim of truth in empiricism can be interpreted as a deep danger, and not only to methods. Popper does not offer a true solution to positivism, and indeed he was very much hyped by many. His thought that the holy grail of knowledge can ultimately never be truly reached also generates certain problems. He can still be admired because he called for scientists to be radical, while acknowledging that most scientists are not. In addition, we could see it from a post-modernist perspective as a necessary step to prevent an influence of empiricism that might pose a threat to and by humankind itself, be it through nuclear destruction, the unachievable and feeble goal of a growth economy (my wording), the naive and technocratic hoax of the eco-modernists (also my wording) or any other paradigm that is short-sighted or naive. In other words, we look at the postmodern.

Critical Theory is now developing to connect to other facets of the discourse, and some may argue that its focus on the social sciences can be seen as critical in itself, or at least as a normative choice that is clearly anthropocentric, has a problematic relationship with the empirical, and has mixed relations with its diverse offspring, which include gender research, critique of globalisation, and many other normative domains that are increasingly explored today. Building on the three worlds of Popper (the physical world, the mind world, human knowledge), we should note another possibility, namely Critical Realism. Roy Bhaskar proposed three ontological domains (strata of knowledge): the real (which is everything there is), the actual (everything we can grasp), and the empirical (everything we can observe). During the last decade, humankind unlocked ever more strata of knowledge, hence much of the actual became empirical to us. We have to acknowledge that some strata of knowledge are hard to relate, or may even be unrelatable, which has consequences for our methodological understanding of the world. Some methods may unlock some strata of knowledge but not others. Some may be specific, some vague. And some may only unlock new strata through novel combinations. What is most relevant to this end, however, is that we might look for causal links, but need to be critical that new strata of knowledge may make them obsolete. Consequently, there are no universal laws that we can strive for, but instead endless strata to explore.

Coming back to bias, Critical Theory seems like an antidote to bias, and some may argue Critical Realism even more so, as it combines criticality with a certain humbleness necessary when exploring the empirical and causal. The explanatory characteristic allowed by Critical Realism might be good enough for the pragmatist, the practical may speak to the modern engagement of science with and for society, and the normative is aware of - well - all things normative, including the critical. Hence a door was opened to a new mode of science, focussing on the situation and locatedness of research within the world. This surely had a head start with Kant, who opened the globe to the world of methods. There is, however, a critical link in Habermas, who highlighted the duality of the rational individual on a small scale and the role of global societies as part of the economy (Habermas 1987). This underlines a crucial link to the original three foundational theories in philosophy, albeit in a dramatic and focused interpretation of modernity. Habermas himself was well aware of the tensions between these two approaches - the critical and the empirical - yet we owe it to Critical Theory and its continuations that a practical and reflexive knowledge production can be conducted within deeply normative systems such as modern democracies.

Linking to the historical development of methods, we can thus clearly claim that Critical Theory (and Critical Realism) opened a new domain or mode of thinking, and its impact can be felt way beyond the social sciences and philosophy that it affected directly. However, coming back to bias, the answer of an almost universal rejection of empiricism will not be followed here. Instead, we need to come back to the three foundational theories of philosophy, and acknowledge that reason, social contract and utilitarianism are the foundation of the first empirical disciplines that are at their core normative (e.g. psychology, social and political science, and economics). Since bias can be partly related to these three theories, and consequently to specific empirical disciplines, we need to recognise that there is an overarching methodological bias. This methodological bias has a signature rooted in specific design criteria, which are in turn related to specific disciplines. Consequently, this methodological bias is a disciplinary bias - even more so, since methods may be shared among scientific disciplines, but most disciplines claim either priority or superiority when it comes to the ownership of a method.

The disciplinary bias of modern science thus creates a deeply normative methodological bias, which some disciplines may try to take into account while others clearly do not. In other words, the dogmatic selection of methods within disciplines has the potential to create deep flaws in empirical research, and we need to be aware of and reflexive about this. The largest bias concerning methods is the choice of methods per se. A critical perspective is thus not only of relevance from the perspective of societal responsibility, but equally from a view on the empirical. Clear documentation and reproducibility of research are important but limited stepping stones in a critique of the methodological; they cannot replace a critical perspective, but only amend it. Empirical knowledge will only ever look at parts - or strata, according to Roy Bhaskar - of reality, yet philosophy can offer a generalisable perspective or theory, and Critical Theory, Critical Realism and other current developments of philosophy can be seen as striving towards an integrated and holistic philosophy of science, which may ultimately link to an overarching theory of ethics (Parfit). If the empirical and the critical inform us, then both a philosophy of science and ethics may tell us how we may act based on our perceptions of reality.

Further Information

  • Some words on Critical Theory
  • A short entry on critical realism

The author of this entry is Henrik von Wehrden.


What is Selection Bias – Types & Examples

Published by Owen Ingram on July 31st, 2023; revised on October 5, 2023

Selection bias is a common phenomenon that affects the validity and generalisability of research findings. This bias creeps into research when the selection of participants is not representative of the entire population.

Let’s look at the selection bias definition in detail. 

What is Selection Bias?

Experts define selection bias as follows:

“Selection bias refers to a systematic error or distortion in the process of selecting participants or samples for a study or analysis, resulting in a non-representative or biased sample.”

It occurs when certain individuals or groups are more likely to be included or excluded from the sample, leading to inaccurate or misleading conclusions.

Selection bias can occur in various fields, including research, surveys, data analysis, and decision-making processes.

Selection Bias Example

A selection bias example is a study on the effectiveness of a new medication for a particular health condition that recruits participants only from a single clinic or hospital. If the participants from this clinic or hospital have access to better healthcare facilities and resources compared to the general population, it can introduce selection bias. 

The results of the study may overestimate the effectiveness of the medication because the participants selected are not representative of the broader population with the health condition.

In this scenario, individuals seeking treatment at the specific clinic or hospital may differ systematically from other patients, for example in the severity of their cases or in the supporting care available to them, leading to outcomes that differ from those of individuals who receive treatment elsewhere or who do not seek treatment at all. The study’s findings would therefore not accurately reflect the real-world effectiveness of the medication for the entire population affected by the health condition.

What are the Types of Selection Bias?

There are several types of selection bias that can occur in research and data analysis:

Self-Selection Bias

This bias occurs when individuals self-select to be part of a study or sample. It can lead to a non-random sample that may not represent the broader population accurately. For example, in surveys, individuals who feel strongly about a topic are more likely to participate, resulting in a biased sample.

Non-Response Bias

Non-response bias occurs when individuals selected to participate in a study or survey do not respond or choose not to participate. If those who do not respond differ systematically from those who do, the results may be biased. For instance, if a survey on income is only completed by individuals with higher incomes, it can lead to an overestimation of average income levels.
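The direction of this bias is easy to demonstrate with a toy simulation; in the hypothetical sketch below, the probability of responding rises with income, so the respondent average duly overshoots the true one. All figures are invented.

```python
# A toy non-response bias simulation: response probability rises with
# income, so the respondent mean overestimates the population mean.
import numpy as np

rng = np.random.default_rng(7)
income = rng.lognormal(mean=10.5, sigma=0.5, size=100_000)

# Illustrative mechanism: richer people are more likely to respond
respond_prob = np.clip(income / income.max() * 0.9 + 0.05, 0.0, 1.0)
responded = rng.random(income.size) < respond_prob

print(f"True mean income:       {income.mean():,.0f}")
print(f"Respondent mean income: {income[responded].mean():,.0f}")
```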

Volunteer Bias

Volunteer bias occurs when individuals voluntarily participate in a study or research. This can lead to a non-representative sample, as those who volunteer may possess certain characteristics or motivations that differ from the general population. For example, in clinical trials, volunteers may be more motivated or have better health than the average population.

Berkson’s Bias

Berkson’s bias is common in hospital-based studies. It arises when the study population is selected from a specific group, such as hospital patients, which may have a higher prevalence of certain conditions compared to the general population. This can result in an underestimation or overestimation of the association between variables.
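Berkson’s bias can be reproduced in a few lines. In the illustrative simulation below, two conditions are independent in the population, but restricting the sample to "admitted" cases (anyone with at least one condition) makes them look negatively associated; the prevalences and the admission rule are invented.

```python
# Berkson's bias in miniature: two independent conditions appear
# negatively correlated once the sample is restricted to hospital
# admissions (here, anyone with at least one condition).
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
disease_a = rng.random(n) < 0.10   # independent in the population
disease_b = rng.random(n) < 0.10

print("Population correlation: ",
      round(np.corrcoef(disease_a, disease_b)[0, 1], 3))

admitted = disease_a | disease_b   # selection into the study sample
print("In-hospital correlation:",
      round(np.corrcoef(disease_a[admitted], disease_b[admitted])[0, 1], 3))
```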

Healthy User Bias

This bias occurs when a study population includes individuals who are more health-conscious or have healthier behaviours than the general population. This can lead to an overestimation of the benefits of certain interventions or treatments.

Overmatching Bias

Overmatching bias occurs when controls are selected based on characteristics that are influenced by the exposure or the outcome. Matching on such characteristics can distort the association between the exposure and the outcome of interest, typically biasing it toward the null and making a true association harder to detect.

Diagnostic Access Bias

Diagnostic access bias occurs when the probability of being diagnosed with a condition depends on exposure status. This bias can distort the relationship between exposure and outcome if one group has better access to diagnostic tests than the other.

What is Selection Bias in Research?

Selection bias in research refers to the systematic error or distortion that occurs when the selection of participants or subjects for a study is not random or representative of the target population. It occurs when certain individuals or groups are more likely to be included or excluded from the study, leading to a biased sample.

Selection bias can arise at various stages of research, including participant recruitment, sampling, and data collection. It can impact the internal validity and generalisability of research findings, as the sample may not accurately represent the larger population of interest.

Selection bias can occur due to various factors, such as non-random sampling methods, self-selection by participants, differential response rates, or exclusion criteria that inadvertently exclude certain groups. These factors can introduce biases that influence the characteristics and outcomes observed in the study population.

What are the Examples of Selection Bias?

Examples of selection bias in daily life can include:

Online Product Reviews

When browsing online reviews, people tend to leave reviews for products they either strongly like or strongly dislike, leading to a biased representation of overall customer satisfaction.

Social Media Feeds

Social media algorithms often personalise content based on users’ past preferences and interactions, resulting in a biased selection of information that may reinforce existing beliefs and limit exposure to diverse perspectives.

Political Surveys

Surveys conducted by political organisations or campaigns may target specific demographics or party supporters, leading to a biased sample that may not accurately represent the views of the entire population.

Restaurant Ratings

People are more likely to leave reviews for restaurants when they have an exceptionally positive or negative experience, which can skew overall ratings and fail to capture the opinions of those who had average or neutral experiences.

Job Application Processes

Hiring managers may unintentionally exhibit selection bias by favouring candidates who come from certain schools or have similar backgrounds, overlooking potential talent from other sources.

Media Coverage

Media outlets often focus on sensational or controversial stories, resulting in a biased selection of news stories that may not accurately reflect the full range of events happening in the world.

Sampling Bias in Surveys

Surveys conducted in specific locations or targeting certain demographics may not capture the opinions and experiences of the broader population, leading to biased results.

It is important to recognise these examples of selection bias and be mindful of their potential impact on our understanding of the world. Seeking diverse sources of information and actively considering alternative perspectives can help mitigate the effects of selection bias in daily life.


How to Avoid Selection Bias?

To avoid selection bias in research or data analysis, consider the following strategies:

Random Sampling

Use random sampling techniques to ensure that every individual or unit in the population has an equal chance of being selected for the study. This helps to create a representative sample and minimises selection bias.
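As a minimal sketch, assuming the sampling frame is available as a pandas DataFrame (the column name here is invented), a simple random sample is one line:

```python
# Simple random sampling from a sampling frame: every row has the
# same probability of selection. The frame is illustrative.
import pandas as pd

frame = pd.DataFrame({"participant_id": range(10_000)})

sample = frame.sample(n=500, random_state=42)
print(len(sample), "participants drawn at random")
```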

Define Inclusion Criteria Carefully

Clearly define the criteria for selecting participants or subjects based on the research objectives. This helps to ensure that the selection process is based on relevant characteristics rather than personal biases or preferences.

Increase Response Rates

Take measures to increase response rates in surveys or studies to minimise non-response bias. Follow up with non-responders, offer incentives for participation, and ensure clear and concise communication about the importance and benefits of participation.

Use Stratified Sampling

If there are specific subgroups within the population that are of interest, employ stratified sampling to ensure adequate representation of each subgroup. This helps to prevent the under-representation or over-representation of particular groups.
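A proportionate stratified draw keeps each subgroup’s share intact; the sketch below does this with pandas, and the strata and sizes are invented for illustration.

```python
# Proportionate stratified sampling: draw the same fraction from each
# stratum so every subgroup keeps its population share. Illustrative data.
import pandas as pd

frame = pd.DataFrame({
    "participant_id": range(1_000),
    "age_group": ["18-34"] * 500 + ["35-54"] * 300 + ["55+"] * 200,
})

sample = frame.groupby("age_group", group_keys=False).sample(
    frac=0.10, random_state=42
)
print(sample["age_group"].value_counts())  # 50 / 30 / 20, mirroring the frame
```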

Avoid Self-Selection

Minimise self-selection bias by actively recruiting participants rather than relying solely on voluntary participation. Reach out to potential participants through various channels, ensuring diversity in recruitment methods.

Consider Using Blinding

In certain studies, blinding the researchers to certain participant characteristics or group assignments can help minimise bias in participant selection and data analysis.

Validate Data Against External Sources

Validate the collected data against external sources or existing datasets to assess the representativeness of the sample and identify any potential biases.

Transparency in Reporting

Clearly describe the sampling methods, inclusion criteria, and any limitations related to participant selection in the research report. This transparency helps readers and reviewers evaluate the potential impact of selection bias on the study’s findings.

Frequently Asked Questions

What is selection bias?

Selection bias refers to a systematic error or distortion in research or data analysis that occurs when the selection of participants or samples is non-random or unrepresentative, leading to biased results.

What are the types of selection bias?

There are several types of selection bias, including self-selection bias, non-response bias, volunteer bias, Berkson’s bias, healthy user bias, overmatching bias, and diagnostic access bias.

How does selection bias impact research findings?

Selection bias can lead to skewed or inaccurate research findings by introducing a non-representative sample that does not accurately reflect the broader population of interest. It can undermine the internal validity and generalisability of study results.

What are examples of selection bias?

Examples of selection bias can be observed in various contexts, such as online product reviews, social media feeds, political surveys, restaurant ratings, job application processes, media coverage, and sampling bias in surveys.



16 Selection Bias Examples


Selection bias occurs when the sample being studied is not representative of the population from which the sample was drawn, leading to skewed or misleading results (Walliman, 2021).

In these situations, the sample under study deviates from a fair, random, and equitable selection process. This influences the outcomes and interpretations of a research study.

A common situation where selection bias affects results is in electoral polling. If the sample that the pollster interviews skews older than the general population, or contains a disproportionate number of men or women compared to the general population, then the data will be wrong. As a result, we might get a shock on election day!
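One standard repair for a demographically skewed poll is post-stratification: reweight each respondent group by its population share. The sketch below shows the arithmetic with invented numbers.

```python
# Post-stratification weighting with invented numbers: the sample
# skews old, so the raw estimate understates support among the young.
population_share = {"18-39": 0.40, "40-64": 0.40, "65+": 0.20}
sample_share     = {"18-39": 0.20, "40-64": 0.40, "65+": 0.40}
support_in_group = {"18-39": 0.60, "40-64": 0.50, "65+": 0.35}

raw = sum(sample_share[g] * support_in_group[g] for g in sample_share)
weighted = sum(population_share[g] * support_in_group[g] for g in sample_share)

print(f"Raw (biased) estimate:    {raw:.1%}")      # 46.0%
print(f"Post-stratified estimate: {weighted:.1%}")  # 51.0%
```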

Selection Bias Examples

1. Sampling Bias

Sampling bias occurs when a researcher selects sampling methods that aren’t representative of the entire population, thereby introducing bias in the representation (Atkinson et al., 2021).

A common example is convenience sampling, where individuals are chosen based on their proximity or accessibility, rather than considering the characteristics of the larger population.

In behavioral science, for example, reliance on undergraduate students as subjects limits the application of many theories to other age groups (Liamputtong, 2020). Such a sample does not represent the wider population, thereby introducing sampling bias.

This form of bias affects the generalizability and external validity of the results (Busetto et al., 2020). Therefore it is crucial to balance representativeness and accessibility while designing the sample strategy (Walliman, 2021).

2. Self-selection Bias

Self-selection bias arises when participants are given the choice to participate in a study and the ones who opt in or out are systematically different from the others (Suter, 2011).

The study findings may not accurately represent the entire population, as those who self-selected may have specific characteristics or behaviors influencing the research outcome (Creswell, 2013). For instance, individuals who agree to be part of a weight loss study might already be motivated to lose weight.

This bias skews the resulting data, which then lack inferential power for the broader population (Bryman, 2015). Hence, while presenting an opportunity for participant autonomy, self-selection bias calls for cautious interpretation of data.

3. Exclusion Bias

Exclusion bias refers to the systematic exclusion of certain individuals from the sample.

It could be due to the specific criteria defined in the study design or the involuntary or intentional exclusion of groups resulting from the recruitment strategy (Walliman, 2021). For example, a study on work productivity excluding night-shift workers will have an exclusion bias.

This form of bias threatens the internal validity of the study as it implies a differential selection of subjects into study groups (Atkinson et al., 2021). Thus, researchers should ensure their selection criteria do not create an undue bias.

4. Berkson's Bias

Berkson’s Bias, named after American statistician Joseph Berkson, is a form of selection bias seen commonly in medical research.

This bias arises when the selection of subjects into a study is related to both their exposure and their outcome (Barker et al., 2016). For example, a study conducted in a hospital setting is more likely to include people who are ill and seeking treatment than healthy individuals, leading to an overrepresentation of one group.

Because this bias decouples the results from the broader population, ensuring a diverse sample drawn from beyond the clinical setting becomes integral (Liamputtong, 2020).

5. Undercoverage Bias

Undercoverage bias happens when some groups of the population are inadequately represented in the sample (Creswell, 2013).

Similar to exclusion bias, it can emerge because researchers did not reach certain groups, or because those groups faced barriers to responding (Bryman, 2015). An example would be a telephone survey that only includes landline numbers, thereby excluding a large sector of the population, notably younger people who rely on mobile phones.

Acknowledging and factoring in such underrepresentation ensures a more accurate result (Suter, 2011).

6. Cherry Picking

Cherry-picking is a type of selection bias that involves selectively presenting, emphasizing, or excluding data that support a particular conclusion while neglecting significant data that may contradict it (Bryman, 2015).

It can lead to inaccurate or misleading findings because the research results have been skewed deliberately. An example could be climate change deniers who selectively focus on particular periods to argue that global warming isn’t happening or isn’t serious.

Researchers must be explicit about their selection process and should refrain from selectively highlighting or suppressing data (Walliman, 2021).

7. Survivorship Bias

Survivorship bias occurs when the focus falls solely on the subjects that "survived" or succeeded, dismissing those that failed or dropped out (Atkinson et al., 2021).

This can clearly skew the results as important factors contributing to failure or dropout might be overlooked. An example is in entrepreneurship where stories of successful founders are commonly told while ignoring the much larger number of entrepreneurs who failed.

To avoid this bias, researchers need to consider the whole spectrum of outcomes (Busetto, Wick, & Gumbinger, 2020).
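
The arithmetic behind survivorship bias is easy to demonstrate. The following Python sketch uses an invented cohort of startups; the threshold and the scores are arbitrary, chosen only to show how conditioning on survival inflates the group average.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cohort of 1,000 startups with standardized "performance"
# scores; by construction the cohort average is about zero.
performance = rng.normal(0.0, 1.0, size=1_000)

# Suppose only ventures above a threshold survive long enough to be studied.
survivors = performance[performance > 0.5]

print(f"Mean of full cohort ({performance.size} firms): {performance.mean():+.2f}")
print(f"Mean of survivors ({survivors.size} firms):     {survivors.mean():+.2f}")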

Read More: Survivorship Bias Examples

8. Time Interval Bias

Time Interval Bias arises when the time between measurements or observations is inconsistent, thereby inflating or reducing the observed effects or associations in a study (Atkinson et al., 2021).

The choice of time intervals is vital, and different intervals can lead to different results. For example, tracking a group of patients’ recovery weekly might lead to a different outcome than if the analysis was done monthly.

While shorter intervals can capture more detail, they may also lead to overestimation of some effects (Suter, 2011).

Researchers need to carefully consider the most accurate and reasonable intervals to mitigate such bias (Bryman, 2015). Therefore, understanding the implication of time intervals is critical for truthful representation and valid interpretation of data (Walliman, 2021).
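
A small simulation makes the point concrete. The recovery curve below is invented for illustration: a brief relapse lasting about two weeks is plainly visible when measurements are taken weekly, but can vanish entirely when they are taken monthly.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical daily symptom scores over 12 weeks: steady recovery with a
# brief relapse around day 40. All parameters are invented for illustration.
days = np.arange(84)
symptoms = 10 - 0.1 * days + 3 * np.exp(-0.5 * ((days - 40) / 3) ** 2)
symptoms += rng.normal(0, 0.2, days.size)

weekly = symptoms[::7]    # measured every 7 days
monthly = symptoms[::28]  # measured every 28 days

# A rise between consecutive measurements signals the relapse.
print("Relapse visible at weekly intervals? ", bool((np.diff(weekly) > 0.5).any()))
print("Relapse visible at monthly intervals?", bool((np.diff(monthly) > 0.5).any()))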

9. Attrition Bias

Attrition Bias, also known as dropout bias, comes into play when participants exit a study before its completion, leading to a skewness in the final results (Bryman, 2015).

This departure can be associated with certain characteristics or responses to the study, thus altering the distribution of variables within the remaining sample (Walliman, 2021). An example would be participants dropping out of a drug efficacy study due to intense side effects.

If many of these participants belonged to the group that received the new drug, the remaining participants would likely show results biased in favor of the new drug’s efficacy (Atkinson et al., 2021).

To control for attrition bias, strategies such as bolstering participant engagement and using intention-to-treat analysis should be considered (Suter, 2011). Therefore, attention to withdrawal reasons and early identification of potential dropout factors are critical aspects of research design and execution (Creswell, 2013).
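
The contrast between a per-protocol analysis and an intention-to-treat analysis can be sketched in a few lines of Python. The trial, the dropout rule, and the imputation choice below are all invented for illustration; real intention-to-treat analyses handle missing outcomes far more carefully.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000

# Hypothetical trial where the treatment truly improves the outcome by
# 1.0 point, but treated participants who respond poorly tend to drop out.
treated = rng.random(n) < 0.5
outcome = rng.normal(0, 1, n) + 1.0 * treated
dropped = treated & (outcome < 0.5) & (rng.random(n) < 0.6)

# Per-protocol analysis (completers only) inflates the effect, because the
# worst-responding treated participants have left the sample.
per_protocol = outcome[treated & ~dropped].mean() - outcome[~treated].mean()

# Intention-to-treat keeps everyone as randomized; here dropouts are
# imputed with the control-group mean (one deliberately simple choice).
imputed = outcome.copy()
imputed[dropped] = outcome[~treated].mean()
itt = imputed[treated].mean() - imputed[~treated].mean()

print(f"True effect:        1.00")
print(f"Per-protocol:       {per_protocol:.2f}")  # biased upward
print(f"Intention-to-treat: {itt:.2f}")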

10. Non-response Bias

Non-response bias arises when the characteristics of those who choose to participate in a study differ significantly from those who do not respond.

For instance, in a survey about personal health habits, individuals with poor health habits may be less likely to respond. Hence, the data would underestimate the prevalence of poor health habits in the population (Walliman, 2021).

To mitigate this bias, researchers can adopt strategies such as contacting non-responders repeatedly or offering incentives to improve response rates (Barker, Pistrang, & Elliott, 2016).
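
One common correction, inverse response-rate weighting, is easy to illustrate. In the Python sketch below, the group-specific response rates are treated as known, which is rarely true in practice; all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50_000

# Hypothetical population: 30% have poor health habits (True below).
poor_habits = rng.random(N) < 0.30

# Response rates differ by group: 20% for poor habits, 60% otherwise.
response_rate = np.where(poor_habits, 0.20, 0.60)
responds = rng.random(N) < response_rate

sample = poor_habits[responds]
print(f"True prevalence:       {poor_habits.mean():.2f}")
print(f"Naive survey estimate: {sample.mean():.2f}")  # underestimates

# Weight each respondent by the inverse of their group's response rate.
weights = 1.0 / response_rate[responds]
print(f"Weighted estimate:     {np.average(sample, weights=weights):.2f}")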

11. Volunteer Bias

Volunteer bias transpires when individuals who volunteer for a research study are fundamentally different from the ones who decline to participate (Atkinson et al., 2021).

Their eagerness to participate is often reflective of strong opinions or experiences related to the research topic. It creates a skewed representation as the volunteers may be more educated, affluent, or health-conscious than the broader population (Creswell, 2013). For instance, in a study regarding alcohol consumption patterns, non-drinkers or moderate drinkers may be less inclined to respond.

Therefore, caution must be exercised when drawing inferences from volunteer-based data, as volunteers tend to over-represent strong opinions and atypical experiences (Suter, 2011). Purposive sampling strategies can help ensure a more balanced representation (Bryman, 2015).

12. Healthy User Effect

The healthy user effect, also called health-conscious bias, arises when participants who voluntarily engage in studies of health behaviors or treatments are generally healthier, more educated, and more compliant than the average populace (Walliman, 2021).

This participation can cause an overestimation of the benefits of the health behavior or treatment being studied (Atkinson et al., 2021). A classic example would be a study of the impacts of a healthy diet, where individuals already conscious about their food choices are more likely to participate (Bryman, 2015).

Such selective participation skews the outcomes towards favorable results (Liamputtong, 2020). So it’s paramount that researchers control for health consciousness in their analysis to ensure the effects being studied are indeed due to the intervention and not related to healthier behaviors (Barker et al., 2016).

13. Exposure Bias

Exposure bias operates when there are inconsistencies or errors in measuring an individual’s exposure to a certain factor or condition in a research study (Suter, 2011).

This might occur when a study measures participants’ sun exposure levels without considering their sunscreen usage, leading to an overestimation of sun exposure and its effects (Bryman, 2015).

Such flawed measurement can consequently undermine the validity and reliability of the research findings (Walliman, 2021). As a result, it’s crucial to consider and control for confounding variables that might affect exposure levels (Barker et al., 2016).

Importantly, employing consistent and objective methods of measurement helps to minimize exposure bias (Liamputtong, 2020).

14. Location Bias

Location bias is a sample distortion that emerges when the setting for data collection influences the research results, making them unrepresentative of the wider population (Atkinson et al., 2021).

If a study on physical fitness is conducted solely in a gym, the results will most likely present a fitness level higher than that of the general population (Suter, 2011). This location-specific data can misrepresent overall fitness levels because a gym environment already attracts more physically active people (Creswell, 2013).

To avoid this bias, researchers should aim to diversify the settings for data collection, ensuring they are reflective of various environments where the target population might be found (Liamputtong, 2020). Therefore, an understanding of the potential influence of the study location is crucial to reduce location bias (Bryman, 2015).

15. Referral Bias

Referral bias appears in studies when the sampled population has been specifically referred from another source, creating potential unrepresentativeness (Barker et al., 2016).

This type of bias is common in healthcare research, where patients referred for specialized care are investigated (Walliman, 2021). Enrolling these patients in a study can misrepresent the condition's severity, as they are already pre-selected based on their need for specialized care (Creswell, 2013).

Consequently, the outcomes of such studies could overestimate disease severity or the effectiveness of specialized treatment (Atkinson et al., 2021). Thus, understanding and considering referral patterns and their implications is a crucial step in mitigating referral bias in research (Suter, 2011).

16. Pre-screening of Subjects

Pre-screening of subjects happens when researchers follow a vetting process to determine whether potential participants are suitable for a study (Walliman, 2021).

This process could inadvertently exclude certain individuals or groups, leading to a biased, non-representative sample (Atkinson et al., 2021).

An example of pre-screening bias is when a study on heart diseases excludes individuals with a history of hypertension. As a result, it could potentially understate the severity of heart conditions as it does not account for such overlapping conditions (Bryman, 2015).

Thus, careful balancing must be undertaken during pre-screening to ensure the sample reflects the wider research context whilst adhering to study-specific needs (Creswell, 2013). Importantly, the implications of pre-screening should be acknowledged in any resulting data interpretations (Liamputtong, 2020).

What’s Wrong with Selection Bias?

Selection bias can and does skew results. This is an overarching issue in both qualitative and quantitative research, as biases may emerge from the chosen selection methods, either intentionally or unintentionally (Busetto, Wick, & Gumbinger, 2020).

Diverse factors such as geography, socioeconomic status, or personal preferences can influence participant choice and thereby introduce bias.

Selection bias ultimately reduces both external and internal validity:

  • External validity is compromised because the biased sample is not representative of the larger population, making it hard to generalize the findings (see: threats to external validity).
  • Internal validity is compromised because the bias introduces additional variables, making it challenging to confirm whether the observed effect is due to the experiment itself or the bias (see: threats to internal validity).

Overall, selection bias contravenes scientific research principles because it potentially leads to inaccurate findings and breaks the trust between the researcher and the public or scientific community.

Combatting Selection Bias: Specialized Methodologies

Addressing selection bias is vital for maintaining the integrity of research outcomes. By combining careful planning, methodological rigor, statistical expertise, and transparency, significant strides can be made in reducing this type of bias (Walliman, 2021).

Specifically, here are four techniques:

1. Stratified Sampling

Stratified sampling is a method in which the larger population is first divided into distinct, non-overlapping subgroups or “strata” based on specific characteristics or variables (Atkinson et al., 2021).

These could be attributes like age range, geographic location, or socio-economic groups. The next step is to randomly select samples from each stratum.

The benefit of the stratified sampling technique is that it achieves a sample more representative of the diversity in the population (Bryman, 2015). Instead of treating the population as homogeneous, it respects heterogeneity and reduces the risk of under-representation.
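
As a concrete sketch, here is proportionate stratified sampling in Python with pandas. The sampling frame and the stratum shares are invented for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)

# Hypothetical sampling frame of 10,000 people with an age-group stratum.
frame = pd.DataFrame({
    "age_group": rng.choice(["18-29", "30-49", "50+"], size=10_000,
                            p=[0.2, 0.5, 0.3]),
})

# Proportionate stratified sampling: draw from each stratum in proportion
# to its share of the population, instead of from the pool as a whole.
n_total = 500
parts = [grp.sample(n=round(n_total * len(grp) / len(frame)), random_state=5)
         for _, grp in frame.groupby("age_group")]
sample = pd.concat(parts)

# The sample's stratum shares now mirror the population's.
print(sample["age_group"].value_counts(normalize=True).sort_index())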

2. Randomization

Randomization is the process of assigning individuals to groups randomly within a study. It ensures that each participant has an equal chance of being assigned to any group, thereby minimizing the risk of selection bias (Creswell, 2013).

Importantly, it also helps to distribute the features of participants evenly across groups. As the distribution is random, differences in outcome can be more confidently attributed to differing interventions rather than underlying differences in the groups.

Its key strength is that it supports causal inferences by balancing both known and unknown confounds (Suter, 2011).
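
In code, simple randomization is little more than a shuffle and a split. The sketch below assumes a hypothetical pool of 200 participants.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical pool of 200 participant IDs.
participants = np.arange(200)

# Shuffle, then split: each person has an equal chance of either arm, so
# known and unknown characteristics balance across groups in expectation.
shuffled = rng.permutation(participants)
treatment, control = shuffled[:100], shuffled[100:]

print(f"Treatment arm: {len(treatment)} participants")
print(f"Control arm:   {len(control)} participants")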

3. Propensity Score Matching

Propensity score matching (PSM) is a statistical method that attempts to estimate the effect of an intervention, treatment, or policy by accounting for the covariates that predict receiving the treatment (Busetto et al., 2020).

Essentially, it matches individuals in the treated group with individuals in the control group with similar “propensity scores” or predicted probabilities of receiving the treatment.

By balancing the observed characteristics between treated and control groups in this manner, PSM helps to mimic a randomized controlled trial and minimize selection bias in non-experimental studies (Barker et al., 2016).
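
A bare-bones version of PSM can be sketched with scikit-learn: a logistic regression for the propensity scores and a nearest-neighbor search for the matching. The data-generating process below is invented for illustration; real applications add caliper constraints, balance checks, and more covariates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(8)
n = 2_000

# Hypothetical observational data: baseline severity drives both who gets
# treated and the outcome; the true treatment effect is 2.0.
severity = rng.normal(0, 1, n)
treated = rng.random(n) < 1 / (1 + np.exp(-severity))  # sicker -> more treatment
outcome = 2.0 * treated - 1.5 * severity + rng.normal(0, 1, n)

# Step 1: estimate propensity scores P(treated | covariates).
X = severity.reshape(-1, 1)
scores = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated unit to the control with the closest score.
controls = NearestNeighbors(n_neighbors=1).fit(scores[~treated].reshape(-1, 1))
_, idx = controls.kneighbors(scores[treated].reshape(-1, 1))
matched_outcomes = outcome[~treated][idx.ravel()]

naive = outcome[treated].mean() - outcome[~treated].mean()
matched = (outcome[treated] - matched_outcomes).mean()
print(f"Naive difference: {naive:.2f}")   # confounded by severity
print(f"Matched estimate: {matched:.2f}")  # much closer to 2.0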

4. Instrumental Variable Methods

Instrumental variable (IV) methods are used in situations where random assignment is not feasible, and there’s potential for uncontrolled confounding (Atkinson et al., 2021).

An instrument is a variable that affects the treatment status but does not independently affect the outcome, except through its effect on treatment status (Walliman, 2021).

The goal of IV methods is to remove bias in the estimated treatment effects by isolating the variability in treatment that is not due to confounding (Bryman, 2015). It is a powerful tool for addressing selection bias in observational studies, but finding a valid instrument can be challenging (Liamputtong, 2020).
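
The classic IV estimator, two-stage least squares (2SLS), can be written out by hand in a few lines. The simulated data below are invented for illustration, with the instrument and confounder constructed so that the IV assumptions hold by design; in real data those assumptions must be argued for, not assumed.

```python
import numpy as np

rng = np.random.default_rng(13)
n = 5_000

# Hypothetical data: an unobserved confounder u raises both treatment
# uptake and the outcome; z (e.g., random encouragement to participate)
# shifts treatment but affects the outcome only through treatment.
u = rng.normal(0, 1, n)
z = rng.integers(0, 2, n).astype(float)
treatment = 0.8 * z + 0.8 * u + rng.normal(0, 1, n)
outcome = 1.0 * treatment + 1.0 * u + rng.normal(0, 1, n)  # true effect = 1.0

def ols_slope(x, y):
    """Slope from a least-squares fit of y on [1, x]."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Naive regression of outcome on treatment is biased upward by u.
naive = ols_slope(treatment, outcome)

# Two-stage least squares: regress treatment on the instrument, then
# regress the outcome on the fitted (confounder-free) treatment values.
X1 = np.column_stack([np.ones(n), z])
fitted = X1 @ np.linalg.lstsq(X1, treatment, rcond=None)[0]
iv = ols_slope(fitted, outcome)

print(f"Naive OLS slope: {naive:.2f}")  # about 1.4 here
print(f"2SLS estimate:   {iv:.2f}")     # close to 1.0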

Overcoming selection bias requires meticulous planning, proper sample selection, and unbiased data analysis (Bryman, 2015). Responsible research and commitment to ethical guidelines will significantly reduce cases of selection bias.

To conclude, selection bias, as emphasized by Liamputtong (2020), is one of the significant forms of bias in research. Its influence can markedly distort research outcomes, and considerable efforts must be made to identify, control, and mitigate its impact on research findings.

Atkinson, P., Delamont, S., Cernat, A., Sakshaug, J., & Williams, R. A. (2021). SAGE Research Methods Foundations. London: SAGE Publications.

Barker, C., Pistrang, N., & Elliott, R. (2016). Research Methods in Clinical Psychology: An Introduction for Students and Practitioners. London: John Wiley & Sons.

Bryman, A. (2015). The SAGE Handbook of Qualitative Research. London: SAGE Publications.

Busetto, L., Wick, W., & Gumbinger, C. (2020). How to use and assess qualitative research methods. Neurological Research and Practice, 2, 1–10.

Creswell, J. W. (2013). Research Design: Qualitative, Quantitative and Mixed Methods Approaches. New York: SAGE Publications.

Liamputtong, P. (2020). Qualitative Research Methods. New York: SAGE Publications.

Suter, W. N. (2011). Introduction to Educational Research: A Critical Thinking Approach. London: SAGE Publications.

Walliman, N. (2021). Research Methods: The Basics. Los Angeles: Routledge.



13 Types of Common Cognitive Biases That Might Be Impairing Your Judgment

Which of these sway your thinking the most?

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."



Although we like to believe that we're rational and logical, the fact is that we are continually under the influence of cognitive biases . These biases distort thinking , influence beliefs, and sway the decisions and judgments that people make each and every day.

Sometimes, cognitive biases are fairly obvious. You might even find that you recognize these tendencies in yourself or others. In other cases, these biases are so subtle that they are almost impossible to notice.

At a Glance

Attention is a limited resource. This means we can't possibly evaluate every possible detail and event when forming thoughts and opinions. Because of this, we often rely on mental shortcuts that speed up our ability to make judgments, but this can sometimes lead to bias. There are many types of biases—including the confirmation bias, the hindsight bias, and the anchoring bias, just to name a few—that can influence our beliefs and actions daily.

The following are just a few types of cognitive biases that have a powerful influence on how you think, how you feel, and how you behave.


The Confirmation Bias

The confirmation bias is the tendency to listen more often to information that confirms our existing beliefs. Through this bias, people tend to favor information that reinforces the things they already think or believe.

Examples include:

  • Only paying attention to information that confirms your beliefs about issues such as gun control and global warming
  • Only following people on social media who share your viewpoints
  • Choosing news sources that present stories that support your views
  • Refusing to listen to the opposing side
  • Not considering all of the facts in a logical and rational manner

There are a few reasons why this happens. One is that only seeking to confirm existing opinions helps limit mental resources we need to use to make decisions. It also helps protect self-esteem by making people feel that their beliefs are accurate.

People on two sides of an issue can listen to the same story and walk away with different interpretations that they feel validate their existing points of view. This is often a sign that the confirmation bias is working to "bias" their opinions.

The problem with this is that it can lead to poor choices, an inability to listen to opposing views, or even contribute to othering people who hold different opinions.

Things that we can do to help reduce the impact of confirmation bias include being open to hearing others' opinions, specifically seeking out opposing views, reading full articles (and not just headlines), questioning the source, and doing the research yourself to see if the source is reliable.

The Hindsight Bias

The hindsight bias is a common cognitive bias that involves the tendency to see events, even random ones, as more predictable than they are. It's also commonly referred to as the "I knew it all along" phenomenon.

Some examples of the hindsight bias include:

  • Insisting that you knew who was going to win a football game once the event is over
  • Believing that you knew all along that one political candidate was going to win an election
  • Saying that you knew you weren't going to win after losing a coin flip with a friend
  • Looking back on an exam and thinking that you knew the answers to the questions you missed
  • Believing you could have predicted which stocks would become profitable

Classic Research

In one classic psychology experiment, college students were asked to predict whether they thought then-nominee Clarence Thomas would be confirmed to the U.S. Supreme Court.

Prior to the Senate vote, 58% of the students thought Thomas would be confirmed. The students were polled again following Thomas's confirmation, and a whopping 78% of students said they had believed Thomas would be confirmed.  

The hindsight bias occurs for a combination of reasons, including our ability to "misremember" previous predictions, our tendency to view events as inevitable, and our tendency to believe we could have foreseen certain events.

The effect of this bias is that it causes us to overestimate our ability to predict events. This can sometimes lead people to take unwise risks.

The Anchoring Bias

The anchoring bias is the tendency to be overly influenced by the first piece of information that we hear. Some examples of how this works:

  • The first number voiced during a price negotiation typically becomes the anchoring point from which all further negotiations are based.
  • Hearing a random number can influence estimates on completely unrelated topics.
  • Doctors can become susceptible to the anchoring bias when diagnosing patients. The physician’s first impressions of the patient often create an anchoring point that can sometimes incorrectly influence all subsequent diagnostic assessments.

While the existence of the anchoring bias is well documented, its causes are still not fully understood. Some research suggests that the source of the anchor information may play a role. Other factors such as priming and mood also appear to have an influence.

Like other cognitive biases, anchoring can have an effect on the decisions you make each day. For instance, it can influence how much you are willing to pay for your home. However, it can sometimes lead to poor choices and make it more difficult for people to consider other factors that might also be important.

The Misinformation Effect

The misinformation effect is the tendency for memories to be heavily influenced by things that happened after the actual event itself. A person who witnesses a car accident or crime might believe that their recollection is crystal clear, but researchers have found that memory is surprisingly susceptible to even very subtle influences.

For example:

  • Research has shown that simply asking questions about an event can change someone's memories of what happened.
  • Watching television coverage may change how people remember the event.
  • Hearing other people talk about a memory from their perspective may change your memory of what transpired.

Classic Memory Research

In one classic experiment by memory expert Elizabeth Loftus, people who watched a video of a car crash were then asked one of two slightly different questions: "How fast were the cars going when they hit each other?" or "How fast were the cars going when they smashed into each other?"

When the witnesses were then questioned a week later whether they had seen any broken glass, those who had been asked the “smashed into” version of the question were more likely to report incorrectly that they had seen broken glass.

There are a few factors that may play a role in this phenomenon. New information may get blended with older memories. In other cases, new information may be used to fill in "gaps" in memory.

The effects of misinformation can range from the trivial to much more serious. It might cause you to misremember something you thought happened at work, or it might lead to someone incorrectly identifying the wrong suspect in a criminal case.

The Actor-Observer Bias

The actor-observer bias is the tendency to attribute our actions to external influences and other people's actions to internal ones. The way we perceive others and how we attribute their actions hinges on a variety of variables, but it can be heavily influenced by whether we are the actor or the observer in a situation.

When it comes to our own actions, we are often far too likely to attribute things to external influences. For example:

  • You might complain that you botched an important meeting because you had jet lag.
  • You might say you failed an exam because the teacher posed too many trick questions.

When it comes to explaining other people’s actions, however, we are far more likely to attribute their behaviors to internal causes. For example:

  • A colleague screwed up an important presentation because he’s lazy and incompetent (not because he also had jet lag).
  • A fellow student bombed a test because they lack diligence and intelligence (and not because they took the same test as you with all those trick questions).

While there are many factors that may play a role, perspective plays a key role. When we are the actors in a situation, we are able to observe our own thoughts and behaviors. When it comes to other people, however, we cannot see what they are thinking. This means we focus on situational forces for ourselves, but guess at the internal characteristics that cause other people's actions.

The problem with this is that it often leads to misunderstandings. Each side of a situation is essentially blaming the other side rather than thinking about all of the variables that might be playing a role.

The False Consensus Effect

The false consensus effect is the tendency people have to overestimate how much other people agree with their own beliefs, behaviors, attitudes, and values. For example:

  • Thinking that other people share your opinion on controversial topics
  • Overestimating the number of people who are similar to you
  • Believing that the majority of people share your preferences

Researchers believe that the false consensus effect happens for a variety of reasons. First, the people we spend the most time with, our family and friends, do often tend to share very similar opinions and beliefs. Because of this, we start to think that this way of thinking is the majority opinion even when we are with people who are not among our group of family and friends.

Another key reason this cognitive bias trips us up so easily is that believing that other people are just like us is good for our self-esteem . It allows us to feel "normal" and maintain a positive view of ourselves in relation to other people.

This can lead people not only to incorrectly think that everyone else agrees with them—it can sometimes lead them to overvalue their own opinions. It also means that we sometimes don't consider how other people might feel when making choices.

The Halo Effect

The halo effect is the tendency for an initial impression of a person to influence what we think of them overall. Also known as the "physical attractiveness stereotype" or the "what is beautiful is good" principle, the halo effect shapes our judgments of others, and we use it to shape theirs, almost every day. For example:

  • Thinking people who are good-looking are also smarter, kinder, and funnier than less attractive people
  • Believing that products marketed by attractive people are also more valuable
  • Thinking that a political candidate who is confident must also be intelligent and competent

One factor that may influence the halo effect is our tendency to want to be correct. If our initial impression of someone was positive, we want to look for proof that our assessment was accurate. It also helps people avoid experiencing cognitive dissonance , which involves holding contradictory beliefs.

This cognitive bias can have a powerful impact in the real world. For example, job applicants perceived as attractive and likable are also more likely to be viewed as competent, smart, and qualified for the job.

The Self-Serving Bias

The self-serving bias is the tendency for people to give themselves credit for successes but lay the blame for failures on outside causes. When you do well on a project, you probably assume that it's because you worked hard. But when things turn out badly, you are more likely to blame it on circumstances or bad luck.

Some examples of this:

  • Attributing good grades to being smart or studying hard
  • Believing your athletic performance is due to practice and hard work
  • Thinking you got the job because of your merits

The self-serving bias can be influenced by a variety of factors. Age and sex have been shown to play a part. Older people are more likely to take credit for their successes, while men are more likely to pin their failures on outside forces.  

This bias does serve an important role in protecting self-esteem. However, it can often also lead to faulty attributions such as blaming others for our own shortcomings.

The Availability Heuristic

The availability heuristic is the tendency to estimate the probability of something happening based on how many examples readily come to mind. Some examples of this:

  • After seeing several news reports of car thefts in your neighborhood, you might start to believe that such crimes are more common than they are.
  • You might believe that plane crashes are more common than they really are because you can easily think of several examples.

It is essentially a mental shortcut designed to save us time when we are trying to determine risk. The problem with relying on this way of thinking is that it often leads to poor estimates and bad decisions.

Smokers who have never known someone to die of a smoking-related illness, for example, might underestimate the health risks of smoking. In contrast, if you have two sisters and five neighbors who have had breast cancer, you might believe it is even more common than statistics suggest.

The Optimism Bias

The optimism bias is a tendency to overestimate the likelihood that good things will happen to us while underestimating the probability that negative events will impact our lives. Essentially, we tend to be too optimistic for our own good.

For example, we may assume that negative events simply won't affect us.

The optimism bias has roots in the availability heuristic. Because you can probably think of examples of bad things happening to other people, it seems more likely that others will be affected by negative events.

This bias can lead people to take health risks like smoking, eating poorly, or not wearing a seat belt. The bad news is that research has found that this optimism bias is incredibly difficult to reduce.

There is good news, however. This tendency toward optimism helps create a sense of anticipation for the future, giving people the hope and motivation they need to pursue their goals.

Other Kinds of Cognitive Bias

Many other cognitive biases can distort how we perceive the world. Just a partial list:

  • Status quo bias reflects a desire to keep things as they are.
  • Apophenia is the tendency to perceive patterns in random occurrences.
  • Framing is presenting a situation in a way that gives a certain impression.

Keep in Mind

The cognitive biases above are common, but this is only a sampling of the many biases that can affect your thinking. These biases collectively influence much of our thoughts and ultimately, decision making.

Many of these biases are inevitable. We simply don't have the time to evaluate every thought in every decision for the presence of any bias. Understanding these biases is very helpful in learning how they can lead us to poor decisions in life.

Dietrich D, Olson M. A demonstration of hindsight bias using the Thomas confirmation vote. Psychol Rep. 1993;72(2):377-378. doi:10.2466/pr0.1993.72.2.377

Lee KK.  An indirect debiasing method: Priming a target attribute reduces judgmental biases in likelihood estimations .  PLoS ONE . 2019;14(3):e0212609. doi:10.1371/journal.pone.0212609

Saposnik G, Redelmeier D, Ruff CC, Tobler PN. Cognitive biases associated with medical decisions: A systematic review .  BMC Med Inform Decis Mak . 2016;16(1):138. doi:10.1186/s12911-016-0377-1

Furnham A., Boo HC. A literature review of anchoring bias .  The Journal of Socio-Economics.  2011;40(1):35-42. doi:10.1016/j.socec.2010.10.008

Loftus EF.  Leading questions and the eyewitness report .  Cognitive Psychology . 1975;7(4):560-572. doi:10.1016/0010-0285(75)90023-7

Challies DM, Hunt M, Garry M, Harper DN. Whatever gave you that idea? False memories following equivalence training: a behavioral account of the misinformation effect .  J Exp Anal Behav . 2011;96(3):343-362. doi:10.1901/jeab.2011.96-343

Miyamoto R, Kikuchi Y.  Gender differences of brain activity in the conflicts based on implicit self-esteem .  PLoS ONE . 2012;7(5):e37901. doi:10.1371/journal.pone.0037901

Weinstein ND, Klein WM.  Resistance of personal risk perceptions to debiasing interventions .  Health Psychol . 1995;14(2):132–140. doi:10.1037//0278-6133.14.2.132

Gratton G, Cooper P, Fabiani M, Carter CS, Karayanidis F. Dynamics of cognitive control: theoretical bases, paradigms, and a view for the future . Psychophysiology . 2018;55(3). doi:10.1111/psyp.13016



Demystifying Selection Bias

Imagine that you're building a product for small and medium businesses that have adopted sustainable practices. You circulate surveys and conduct interviews to arrive at a particular conclusion. You reach out to your social circle and ask them to forward the survey to others they know. Now take a moment and reflect on the kind of people who participated in your study. The data will most likely be skewed because people weren't selected at random. The results you derive won't be representative of the target population you're hoping to gain insights from.

This is how selection bias plays out and leads to unintentional discrimination. Let's explore in greater detail how selection bias affects our daily lives, especially in workplace settings.

Demystifying The Definition Of Selection Bias

Unconscious bias, also known as cognitive bias, is a way in which our minds take shortcuts while processing information. Our decision-making and critical thinking skills are compromised as we jump to conclusions. Unconscious bias can affect workplaces and organizations to a great extent—making them less inclusive and diverse. However, we can defeat and overcome such unintentional discrimination and make informed decisions.

There are various types of biases that have the potential to impact hiring, mentoring, and promotion processes in professional settings. One commonly occurring bias is selection bias. You've probably come across examples of selection bias in research and data sampling. Selection bias refers to a situation in which data or participants cannot be randomized: you reach an incorrect (biased) conclusion because participants weren't fairly selected.

What Are The Types Of Selection Bias?

It isn’t easy to tackle biases because human brains aren’t wired that way. We take mental shortcuts, even if it distorts thought processes. We can’t be 100% bias-free, but we can keep biases in check. Let’s explore the different types of selection bias and the effective ways of challenging them:

Sampling Bias

The most common type of selection bias, sampling bias occurs when you draw incorrect (biased) conclusions after analyzing a subset of data (sample) because of your participant pool. In the example stated above, reaching out to common social circles makes room for sampling bias.

When you have a limited circle, people who are similar to each other will constitute the entire participant pool. You'll have neither variety nor differing insights to enhance the quality of your study. Consider varying the channels through which you distribute your survey.

Pre-Screening

This type of selection bias is commonly seen during hiring. Employers often tend to pre-screen candidates’ profiles (for example, checking LinkedIn profiles) before meeting them in person. Imagine that you are a hiring manager and an alumnus of an elite business school. A candidate who graduated from the same school is more likely to grab your attention.

Pre-screening bias in hiring can be tackled through double-blind interviews. This means that you ask candidates to block out personal details and make the application as objective as possible. However, the process is time-consuming and isn’t always feasible in fast-paced work environments.

Survivorship Bias

It occurs when you select a sample or data that passed the selection process and ignore the subjects or information that didn’t. In other words, you focus only on that part of the data or sample that already went through some kind of pre-selection process and overlook other details because they’re not visible anymore. This leads to overly optimistic and incorrect results.

Survivorship bias is commonly observed in startup businesses. Entrepreneurs are often under the impression that their success is solely the result of hard work. However, many factors influence the success of any business: maybe the timing was good, or investors were looking to put their money into that particular product. Whatever the reason may be, you need to be mindful of this type of selection bias.



Recognizing Bias: A Problem Solving and Critical Thinking Skills Guide

In today's world, it is becoming increasingly important to recognize bias and how it can affect our decision-making. Bias can cloud our judgement, lead us to make decisions that are not in our best interests, and limit our ability to solve problems effectively. In this guide, we will explore the concept of recognizing bias and how it can be used as a tool for developing critical thinking and problem-solving skills. We will discuss the various types of biases, why recognizing them is important, and how to identify and counteract them.


Such biases can lead to unfair judgments or decisions. Common types include cultural bias, the tendency to favor one's own culture or group, and political bias, the tendency to favor one's own political party or beliefs. To identify and address bias in oneself and others, it is important to be aware of its potential sources, including personal opinions, values, and preconceived notions. Being mindful of these sources can help us become more aware of our own biases and recognize them in others.

It is also important to be open-minded and willing to consider alternative perspectives, and it helps to challenge our own assumptions and beliefs by questioning them and seeking out evidence that supports or refutes them. The potential implications of not recognizing or addressing bias are significant: left unchecked, biases can lead to unfair decisions, flawed judgments, and inaccurate conclusions, with serious consequences for individuals and organizations alike.

Strategies for Identifying and Addressing Bias

Recognizing bias in oneself and others is an important part of making informed decisions. There are several strategies that can be used to identify and address bias. One of the most effective strategies is to take a step back and look at the situation objectively. This involves examining the facts and assumptions that are being used to make decisions.

It can also involve assessing the potential impact of decisions on multiple stakeholders. By removing personal biases from the equation, it is possible to make more informed decisions. Another important strategy for identifying and addressing bias is to question the sources of information. It is important to consider the credibility of sources, as well as any potential biases that may be present.

Fact-checking sources and considering multiple perspectives can help identify any potential biases in the information being used. In addition, it is important to remain aware of our own biases. We all have preconceived notions about certain topics that can affect our decision-making process. By being mindful of our biases, we can avoid making decisions that are influenced by them. Finally, it is important to be open to other perspectives and willing to engage in meaningful dialogue with others.

What Is Bias?

Bias is an unconscious preference that influences decision making and can lead to adverse outcomes. It is important to recognize bias because it can have a negative impact on our ability to make sound decisions and engage in problem solving and critical thinking. Bias can manifest itself in various ways, from subtle mental shortcuts to overt prejudices.

Types of bias include confirmation bias, where we seek out information that confirms our existing beliefs; availability bias, where we base decisions on the information that is most readily available; and representativeness bias, where we assume that two events or objects are related because they share similar characteristics. Other forms include the halo effect, where a single positive quality or trait can influence the perception of an entire person, and stereotyping, the tendency to make judgments about individuals based on their perceived membership in a certain group.

Recognizing bias in ourselves and others allows us to make informed decisions and engage in problem solving and critical thinking.

Sources of Bias

Bias can have a profound effect on decisions, leading to outcomes that are not based on facts or evidence. Personal opinions and values, shaped by past experiences, cultural background, and other personal factors, can lead to biased decision-making. For example, someone's opinion about a certain topic may be based on what they have previously heard or read. Similarly, preconceived notions can lead to biased conclusions, and cultural norms can also play a role in creating bias.

For instance, people may be more likely to believe information from a source they trust or respect, even if it is not based on fact. Similarly, people may be more likely to make decisions that conform to the expectations of their culture or society. In addition, people can also be influenced by their own prejudices or stereotypes. This type of bias can lead to unfair treatment of certain individuals or groups of people. Finally, it is important to be aware of the potential for confirmation bias, where people will seek out information that confirms their existing beliefs and disregard any contradictory evidence. By recognizing and understanding these sources of bias, people can make more informed decisions and engage in more effective problem solving and critical thinking.

In conclusion, recognizing and addressing bias is an essential part of problem solving and critical thinking. Bias can come from many sources, including our own beliefs, cultural norms, and past experiences. Knowing the types of bias and strategies for identifying and addressing them can help us make informed decisions and better engage in critical thinking. Taking time to reflect on our own biases is also important for making unbiased decisions.

Ultimately, recognizing and addressing bias will improve our problem-solving and critical thinking skills.



Can the bias in algorithms help us see our own?

Algorithms were supposed to make our lives easier and fairer: help us find the best job applicants, help judges impartially assess the risks of bail and bond decisions, and ensure that healthcare is delivered to the patients with the greatest need. By now, though, we know that algorithms can be just as biased as the human decision-makers they inform and replace.

What if that weren't a bad thing?

New research by Carey Morewedge, a Boston University Questrom School of Business professor of marketing and Everett W. Lord Distinguished Faculty Scholar, found that people recognize more of their biases in algorithms' decisions than they do in their own -- even when those decisions are the same. The research, published in the Proceedings of the National Academy of Sciences, suggests ways that awareness might help human decision-makers recognize and correct for their biases.

"A social problem is that algorithms learn and, at scale, roll out biases in the human decisions on which they were trained," says Morewedge, who also chairs Questrom's marketing department. For example: In 2015, Amazon tested (and soon scrapped) an algorithm to help its hiring managers filter through job applicants. They found that the program boosted résumés it perceived to come from male applicants, and downgraded those from female applicants, a clear case of gender bias.

But that same year, just 39 percent of Amazon's workforce were women. If the algorithm had been trained on Amazon's existing hiring data, it's no wonder it prioritized male applicants -- Amazon already was. If its algorithm had a gender bias, "it's because Amazon's managers were biased in their hiring decisions," Morewedge says.

"Algorithms can codify and amplify human bias, but algorithms also reveal structural biases in our society," he says. "Many biases cannot be observed at an individual level. It's hard to prove bias, for instance, in a single hiring decision. But when we add up decisions within and across persons, as we do when building algorithms, it can reveal structural biases in our systems and organizations."

Morewedge and his collaborators -- Begüm Çeliktutan and Romain Cadario, both at Erasmus University in the Netherlands -- devised a series of experiments designed to tease out people's social biases (including racism, sexism, and ageism). The team then compared research participants' recognition of how those biases colored their own decisions versus decisions made by an algorithm. In the experiments, participants sometimes saw the decisions of real algorithms. But there was a catch: other times, the decisions attributed to algorithms were actually the participants' choices, in disguise.

Across the board, participants were more likely to see bias in the decisions they thought came from algorithms than in their own decisions. Participants also saw as much bias in the decisions of algorithms as they did in the decisions of other people. (People generally better recognize bias in others than in themselves, a phenomenon called the bias blind spot.) Participants were also more likely to correct for bias in those decisions after the fact, a crucial step for minimizing bias in the future.

Algorithms Remove the Bias Blind Spot

The researchers ran sets of participants, more than 6,000 in total, through nine experiments. In the first, participants rated a set of Airbnb listings, which included a few pieces of information about each listing: its average star rating (on a scale of 1 to 5) and the host's name. The researchers assigned these fictional listings to hosts with names that were "distinctively African American or white," based on previous research identifying racial bias, according to the paper. The participants rated how likely they were to rent each listing.

In the second half of the experiment, participants were told about a research finding that explained how the host's race might bias the ratings. Then, the researchers showed participants a set of ratings and asked them to assess (on a scale of 1 to 7) how likely it was that bias had influenced the ratings.

Participants saw either their own rating reflected back to them, their own rating under the guise of an algorithm's, their own rating under the guise of someone else's, or an actual algorithm rating based on their preferences.

The researchers repeated this setup several times, testing for race, gender, age, and attractiveness bias in the profiles of Lyft drivers and Airbnb hosts. Each time, the results were consistent. Participants who thought they saw an algorithm's ratings or someone else's ratings (whether or not they actually were) were more likely to perceive bias in the results.

Morewedge attributes this to the different evidence we use to assess bias in others versus in ourselves. Because we have insight into our own thought process, he says, we're more likely to trace back through our thinking and conclude that a decision wasn't biased -- that it was driven, perhaps, by some other factor. When analyzing the decisions of other people, however, all we have to judge is the outcome.

"Let's say you're organizing a panel of speakers for an event," Morewedge says. "If all those speakers are men, you might say that the outcome wasn't the result of gender bias because you weren't even thinking about gender when you invited these speakers. But if you were attending this event and saw a panel of all-male speakers, you're more likely to conclude that there was gender bias in the selection."

Indeed, in one of their experiments, the researchers found that participants who were more prone to this bias blind spot were also more likely to see bias in decisions attributed to algorithms or to others than in their own decisions. In another experiment, they found that people more readily saw their own decisions as influenced by factors that were fairly neutral or reasonable, such as an Airbnb host's star rating, than by a prejudicial bias, such as race -- perhaps because admitting a preference for a five-star rental isn't as threatening to our sense of self or to how others might view us, Morewedge suggests.

Algorithms as Mirrors: Seeing and Correcting Human Bias

In their final experiment, the researchers gave participants a chance to correct bias in either their own ratings or the ratings of an algorithm (real or not). People were more likely to correct the algorithm's decisions, which reduced the actual bias in its ratings.

This is the crucial step, Morewedge says: for anyone motivated to reduce bias, being able to see it comes first. Their research presents evidence that algorithms can be used as mirrors -- a way to identify bias even when people can't see it in themselves.

"Right now, I think the literature on algorithmic bias is bleak," Morewedge says. "A lot of it says that we need to develop statistical methods to reduce prejudice in algorithms. But part of the problem is that prejudice comes from people. We should work to make algorithms better, but we should also work to make ourselves less biased.

"What's exciting about this work is that it shows that algorithms can codify or amplify human bias, but algorithms can also be tools to help people better see their own biases and correct them," he says. "Algorithms are a double-edged sword. They can be a tool that amplifies our worst tendencies. And algorithms can be a tool that can help better ourselves."


Story Source:

Materials provided by Boston University. Original written by Molly Callahan.

Journal Reference :

  • Begüm Çeliktutan, Romain Cadario, Carey K. Morewedge. People see more of their biases in algorithms. Proceedings of the National Academy of Sciences, 2024; 121(16). DOI: 10.1073/pnas.2317602121
