
How to Write a Great Hypothesis

Hypothesis Format, Examples, and Tips

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


Amy Morin, LCSW, is a psychotherapist and international bestselling author. Her books, including "13 Things Mentally Strong People Don't Do," have been translated into more than 40 languages. Her TEDx talk,  "The Secret of Becoming Mentally Strong," is one of the most viewed talks of all time.




A hypothesis is a tentative statement about the relationship between two or more  variables. It is a specific, testable prediction about what you expect to happen in a study.

For example, a study designed to look at the relationship between sleep deprivation and test performance might have a hypothesis that states: "This study is designed to assess the hypothesis that sleep-deprived people will perform worse on a test than individuals who are not sleep-deprived."

This article explores how a hypothesis is used in psychology research, how to write a good hypothesis, and the different types of hypotheses you might use.

The Hypothesis in the Scientific Method

In the scientific method, whether it involves research in psychology, biology, or some other area, a hypothesis represents what the researchers think will happen in an experiment. The scientific method involves the following steps:

  • Forming a question
  • Performing background research
  • Creating a hypothesis
  • Designing an experiment
  • Collecting data
  • Analyzing the results
  • Drawing conclusions
  • Communicating the results

The hypothesis is a prediction, but it involves more than a guess. Most of the time, the hypothesis begins with a question which is then explored through background research. It is only at this point that researchers begin to develop a testable hypothesis. Unless you are creating an exploratory study, your hypothesis should always explain what you expect to happen.

In a study exploring the effects of a particular drug, the hypothesis might be that researchers expect the drug to have some type of effect on the symptoms of a specific illness. In psychology, the hypothesis might focus on how a certain aspect of the environment might influence a particular behavior.

Remember, a hypothesis does not have to be correct. While the hypothesis predicts what the researchers expect to see, the goal of the research is to determine whether this guess is right or wrong. When conducting an experiment, researchers might explore a number of factors to determine which ones might contribute to the ultimate outcome.

In many cases, researchers may find that the results of an experiment  do not  support the original hypothesis. When writing up these results, the researchers might suggest other options that should be explored in future studies.

In many cases, researchers might draw a hypothesis from a specific theory or build on previous research. For example, prior research has shown that stress can impact the immune system. So a researcher might hypothesize: "People with high stress levels will be more likely to contract a common cold after being exposed to the virus than people who have low stress levels."

In other instances, researchers might look at commonly held beliefs or folk wisdom. "Birds of a feather flock together" is one example of folk wisdom that a psychologist might try to investigate. The researcher might pose a specific hypothesis that "People tend to select romantic partners who are similar to them in interests and educational level."

Elements of a Good Hypothesis

So how do you write a good hypothesis? When trying to come up with a hypothesis for your research or experiments, ask yourself the following questions:

  • Is your hypothesis based on your research on a topic?
  • Can your hypothesis be tested?
  • Does your hypothesis include independent and dependent variables?

Before you come up with a specific hypothesis, spend some time doing background research. Once you have completed a literature review, start thinking about potential questions you still have. Pay attention to the discussion section in the journal articles you read. Many authors will suggest questions that still need to be explored.

To form a hypothesis, you should take these steps:

  • Collect as many observations about a topic or problem as you can.
  • Evaluate these observations and look for possible causes of the problem.
  • Create a list of possible explanations that you might want to explore.
  • After you have developed some possible hypotheses, think of ways that you could confirm or disprove each hypothesis through experimentation. This is known as falsifiability.

In the scientific method, falsifiability is an important part of any valid hypothesis. In order to test a claim scientifically, it must be possible that the claim could be proven false.

Students sometimes confuse falsifiability with the idea that it means something is false, which is not the case. Falsifiability means that if something were false, it would be possible to demonstrate that it is false.

One of the hallmarks of pseudoscience is that it makes claims that cannot be refuted or proven false.

A variable is a factor or element that can be changed and manipulated in ways that are observable and measurable. However, the researcher must also define how the variable will be manipulated and measured in the study.

For example, a researcher might operationally define the variable "test anxiety" as the results of a self-report measure of anxiety experienced during an exam. A "study habits" variable might be defined by the amount of studying that actually occurs as measured by time.

These precise descriptions are important because many things can be measured in a number of different ways. One of the basic principles of any type of scientific research is that the results must be replicable. By clearly detailing the specifics of how the variables were measured and manipulated, other researchers can better understand the results and repeat the study if needed.

Some variables are more difficult than others to define. How would you operationally define a variable such as aggression? For obvious ethical reasons, researchers cannot create a situation in which a person behaves aggressively toward others.

In order to measure this variable, the researcher must devise a measurement that assesses aggressive behavior without harming other people. In this situation, the researcher might utilize a simulated task to measure aggressiveness.

Hypothesis Checklist

  • Does your hypothesis focus on something that you can actually test?
  • Does your hypothesis include both an independent and dependent variable?
  • Can you manipulate the variables?
  • Can your hypothesis be tested without violating ethical standards?

The hypothesis you use will depend on what you are investigating and hoping to find. Some of the main types of hypotheses that you might use include:

  • Simple hypothesis : This type of hypothesis suggests that there is a relationship between one independent variable and one dependent variable.
  • Complex hypothesis : This type of hypothesis suggests a relationship between three or more variables, such as two independent variables and a dependent variable.
  • Null hypothesis : This hypothesis suggests no relationship exists between two or more variables.
  • Alternative hypothesis : This hypothesis states the opposite of the null hypothesis.
  • Statistical hypothesis : This hypothesis uses statistical analysis to evaluate a representative sample of the population and then generalizes the findings to the larger group.
  • Logical hypothesis : This hypothesis assumes a relationship between variables without collecting data or evidence.
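To make the null-versus-alternative distinction concrete, here is a small sketch that is not part of the original article: it tests the sleep-deprivation example from earlier using Welch's t statistic, with entirely made-up exam scores standing in for real data. A full statistical hypothesis test would also compute a p-value, but the direction and size of the t statistic show the basic idea.

```python
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples.

    The null hypothesis says the two group means are equal;
    a t value far from zero is evidence for the alternative.
    """
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a = statistics.variance(sample_a)  # sample variance (divides by n - 1)
    var_b = statistics.variance(sample_b)
    standard_error = (var_a / len(sample_a) + var_b / len(sample_b)) ** 0.5
    return (mean_a - mean_b) / standard_error

# Hypothetical exam scores: well-rested vs. sleep-deprived students
rested = [82, 88, 75, 90, 85, 79, 93, 86]
deprived = [70, 65, 74, 68, 72, 77, 63, 71]

t = welch_t(rested, deprived)
print(round(t, 2))  # → 5.58; a large positive t favors the alternative hypothesis
```

With real data you would compare this statistic against a t distribution (or use a library such as SciPy) to decide whether to reject the null hypothesis at a chosen significance level.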

A hypothesis often follows a basic format of "If {this happens} then {this will happen}." One way to structure your hypothesis is to describe what will happen to the dependent variable if you change the independent variable.

The basic format might be: "If {these changes are made to a certain independent variable}, then we will observe {a change in a specific dependent variable}."

A few examples of simple hypotheses:

  • "Students who eat breakfast will perform better on a math exam than students who do not eat breakfast."
  • "Students who experience test anxiety before an English exam will get lower scores than students who do not experience test anxiety."
  • "Motorists who talk on the phone while driving will be more likely to make errors on a driving course than those who do not talk on the phone."

Examples of a complex hypothesis include:

  • "People with high-sugar diets and sedentary activity levels are more likely to develop depression."
  • "Younger people who are regularly exposed to green, outdoor areas have better subjective well-being than older adults who have limited exposure to green spaces."

Examples of a null hypothesis include:

  • "Children who receive a new reading intervention will have scores different than students who do not receive the intervention."
  • "There will be no difference in scores on a memory recall task between children and adults."

Examples of an alternative hypothesis:

  • "Children who receive a new reading intervention will perform better than students who did not receive the intervention."
  • "Adults will perform better on a memory task than children." 

Collecting Data on Your Hypothesis

Once a researcher has formed a testable hypothesis, the next step is to select a research design and start collecting data. The research method depends largely on exactly what is being studied. There are two basic types of research methods: descriptive research and experimental research.

Descriptive Research Methods

Descriptive research methods such as case studies, naturalistic observations, and surveys are often used when it would be impossible or difficult to conduct an experiment. These methods are best used to describe different aspects of a behavior or psychological phenomenon.

Once a researcher has collected data using descriptive methods, a correlational study can then be used to look at how the variables are related. This type of research method might be used to investigate a hypothesis that is difficult to test experimentally.

Experimental Research Methods

Experimental methods  are used to demonstrate causal relationships between variables. In an experiment, the researcher systematically manipulates a variable of interest (known as the independent variable) and measures the effect on another variable (known as the dependent variable).

Unlike correlational studies, which can only be used to determine if there is a relationship between two variables, experimental methods can be used to determine the actual nature of the relationship—whether changes in one variable actually  cause  another to change.

A Word From Verywell

The hypothesis is a critical part of any scientific exploration. It represents what researchers expect to find in a study or experiment. In situations where the hypothesis is unsupported by the research, the research still has value. Such research helps us better understand how different aspects of the natural world relate to one another. It also helps us develop new hypotheses that can then be tested in the future.

Some examples of how to write a hypothesis include:

  • "Staying up late will lead to worse test performance the next day."
  • "People who consume one apple each day will visit the doctor fewer times each year."
  • "Breaking study sessions up into three 20-minute sessions will lead to better test results than a single 60-minute study session."

The four parts of a hypothesis are:

  • The research question
  • The independent variable (IV)
  • The dependent variable (DV)
  • The proposed relationship between the IV and DV

Castillo M. The scientific method: a need for something better? AJNR Am J Neuroradiol. 2013;34(9):1669-71. doi:10.3174/ajnr.A3401

Nevid J. Psychology: Concepts and Applications. Wadsworth; 2013.




Free Resources, Ep. 13: How to Write Useful FBA Hypothesis Statements

How to write useful and meaningful FBA hypothesis statements like a pro with a free download


Welcome back, and I am so glad that you have joined us again. We are talking about behavior, which I know is an issue for many of us in special education classrooms. I am Chris Reeve, I’m your host, and up to now we’ve taken our data and gathered all of our information. Today we’re going to start getting to the good stuff, because we’re getting to the point where we’re going to look at why in the world this behavior is happening in the first place and what we are going to do about it.

You also will see a number of visual examples that I obviously cannot give you on a podcast, so that may make it a little bit easier. So you can go to the blog post and you can see all the different examples of hypothesis statements, download the template and you’ll also be able to download a transcript or read this if you would rather make sense of it that way. It gets a little interesting when I start to talk about these things without any visuals, because you know how I love visuals. So let me give you just a quick disclaimer as well.

SYNTHESIZE FBA INFORMATION

I’m going to give you hypothesis statements in this podcast, and I’m going to give you a summary of the information about the student’s behavior. It’s going to sound like that information came from one instance, but it didn’t.

We have to triangulate all of our information, our information from staff, our information from families, our data collection, our record review, all the things we’ve talked about up until this point are going to go into that hypothesis statement. So they are all very important and I’m going to pick up from where we’ve triangulated all that information. We’ve got some idea about some setting events, we’ve seen what happens before, we’ve seen what happens afterwards and put it in kind of a compilation. So it isn’t as easy as I make it sound because as I often say, human behavior is just not simple. But when you just hear me talk about it, the cases kind of sound like I’m just picking out one instance. I’m not picking out a single episode of behavior, I’m using a composite of all the different information. So let’s get started.

BUILDING EFFECTIVE HYPOTHESIS STATEMENTS

We’re now moving into step 3 in our 5-step process of meaningful behavioral support, and that is really developing our hypothesis statements. Now keep in mind that a hypothesis is a best guess. We don’t know that this is what’s actually driving the behavior until we confirm our hypothesis, and I’ll be talking later in the series about how we can do that. You can test hypotheses when you develop them, but within a school setting, more likely you will develop interventions that address them and see if they work.

We want to make sure that when we are developing our hypothesis statements, we are clearly tying them to the data and not getting lost in our interviews and things like that. We want to make sure that we’re accounting for the interviews and that less objective information, but that our data is solidly supporting our hypothesis. That’s why we took the data.

FREE DOWNLOAD OF GRAPHIC ORGANIZER

Writing our hypothesis statements is critical to the success of the intervention plan because they should lead you to what your behavioral solutions are going to be and in the blog post that goes with today’s podcast, you will find a download that you can get that actually structures your hypothesis statements.

One of the things that I like about using this hypothesis statement structure is that I can take my antecedent information and my setting event information and put it in one block. Then my behavior goes in the next block, and how the environment is responding, or what’s happening in the environment, comes afterwards. So it’s very easy to take my ABCs and translate them into this. I can then take this setup and say: when this happens, he is likely to engage in this behavior, and in the environment this commonly happens, if that’s what my data trends are telling me.

That then allows me to take those antecedents and make adjustments to the environment so that we can prevent the behavior from happening. It lets me know that if he starts off with smaller behaviors, that should be an indicator to me that something bigger is coming, and I should intervene earlier. And it lets me know what we need to change about how we react or respond to the behavior, or what’s happening in the environment after the behavior, so that we can reduce the reinforcement for it. All of that gets mapped directly from the hypothesis statement. So go to autismclassroomresources/episode13 and download the hypothesis statement graphic, and it will walk you through how to put that together. You can also download a transcript, or read this post if you’d rather do that than listen.

WHAT GOES INTO A HYPOTHESIS STATEMENT

So let’s talk for a few minutes about what goes into your hypothesis statement.

SETTING EVENTS

One is the setting events. These lead us to how we eliminate or reduce the impact of distant factors that might influence the behavior. The setting events in our hypothesis ("He is more likely to engage in this behavior when X, Y, and Z") tell us we need to address X, Y, and Z in some way. Now, as we’ve said in episode 11, we cannot always make X, Y, and Z go away. If I could make him sleep through the night, I’d do it.

But I do know that maybe I can modify what I ask you to do on a day when you didn’t sleep well at night, or a day that you don’t feel well, or a day that you didn’t take your medicine. Maybe I modify my demands. Maybe I have you participate in group activities less. And that’s where that brainstorming process that we’ll talk about when we get to intervention plans becomes really key. But your setting events are going to tell you what you need to try to accommodate for if you cannot change it.

ANTECEDENTS

Your antecedents are going to lead us to know exactly how to restructure the environment to prevent the behavior.

Our behavior tells us whether or not the form of the behavior is relevant to the function. So does he only scream and get attention, but when he hits, people ignore him? It probably happened the other way around, but is the form related to the function? Most of the time it’s not, in my experience, but it is possible that you will have a student who engages in one kind of behavior because people may come to him, and another type of behavior because it gets people to go away.

CONSEQUENCES

The consequence tells us what might be maintaining the behavior. So we need to know how we need to change our response to try to prevent the behavior from increasing over time. When we use the graphic organizer for the hypothesis statements, we have three boxes: "When the student…," and we fill that in; "he will…," and that’s the behavior; "and as a result…," this happens. The setting events kind of go over that. So when this situation is in place, when this student does this or encounters this, he engages in this behavior, and this is what happens in the environment.

FBA HYPOTHESIS STATEMENT EXAMPLES

So to give you an example of a hypothesis: when the student is faced with situations with social or academic demands, particularly those involving language. So very specific. I’ve been able to take my data and say this almost always happens in situations with social or academic demands, so not other kinds of demands, and those that involve making him practice language-related tasks are much more likely to produce problem behaviors. The behavior: when faced with those situations, he sometimes (because it’s not every single time) hits, screams, and/or bites others. And then what happens as a result: he is sometimes removed from the situation, the task is delayed by the behavior, or staff provides assistance in completing the task. Those are all consequences that often differ based on what situation he’s in and what setting he’s in and things like that, but they were common consequences to this behavior that basically kept him from having to do the activity or delayed it in some way.

INTEGRATE WHAT YOU’VE LEARNED

Now that’s a whole lot more descriptive than a function that just says he engages in this behavior to escape. Because now I know that when he’s faced with situations with social or academic demands, in particular those involving language, we need to maybe include more easy tasks in with our hard ones, with our language demands. We need to give him, maybe, more breaks during that time.

We know what his behavior is, and he does a constellation of behaviors. There’s not one specific form of behavior related to this situation, and then we need to give him a way to replace this because it is an escape. We need to give him a way to ask for a break, because the result of his behavior is being removed or having the task be delayed. It’s essentially escape related. So we want to make sure that we’ve got a replacement behavior that focuses on that. And we will talk in a whole episode about replacement behaviors, because they aren’t often what many people think they are. But back to task: you can see how that gives me much more specific information about where I’m going to address my behavior intervention plan.

MORE SPECIFIC

Now I may get even more specific. I may say something like…

James appears to engage in challenging behavior to escape from tasks that are difficult for him. Some of these tasks are work-related. Some may be overwhelming or difficult socially, and some may be things that are frustrating for him like waiting. Engaging in significant challenging behaviors serves to gain assistance or removal from these situations effectively.

You may also have,

James sometimes engages in challenging behavior to protest or express frustration about not being allowed to have something that he wants.

BEHAVIOR OFTEN HAS MULTIPLE FUNCTIONS

So we know what situations he’s likely to have the problem in. And we also know that his behavior is complex. And you’ve heard me say this throughout this series: human behavior isn’t simple. Rarely, except occasionally in very young children, do we see behavior serving only one function. Very frequently we see it having maybe a main function, but also another function.

So often we will see a student who engages in behavior to escape. But when you give him just a break where nobody interacts with them, you continue to see problems because that behavior was also to get attention. So it got him out of the task and it got people engaging with him together. So never think when you’re writing your hypothesis statements that you have to be limited to one function. We will have to pick what we’re going to do when we get to the behavior plan based on that. So our setting events factor into the  “When the student..”  section of the hypothesis and they help us explain why behaviors happen on one day in relation to an antecedent and on another day they don’t.

COMPLEX PROBLEMS HAVE COMPLEX HYPOTHESIS STATEMENTS

So James’ data indicated that the behaviors occurred on some days and not on others. And further investigation into the data showed us that days on which he hadn’t had his medicine were more likely to result in challenging behavior. One solution: make sure he always takes his medicine. We may be able to do that. And I’ve certainly had students that we’ve said, “You know what? Send his medicine to school. We’re happy to give it to him first thing in the morning if they’re having a hard time getting him to take it”. Sometimes even at school, James wouldn’t necessarily take his medicine. He put it in his mouth, he spit it out. Twenty minutes later we’d find out he hadn’t taken it. So, another solution factored into his program and the hypothesis statements:

On days when James has not had his medication and he is presented with a language task, he is likely to engage in these behaviors, which then result in being removed from the task.

So maybe on the days when we knew he hasn’t taken his medicine, we adjust our demands so that we might lower that antecedent that sets that behavior off.

MORE EXAMPLES OF HYPOTHESIS STATEMENTS

So let’s look at a few other examples for different kinds of functions.

ESCAPE FUNCTION HYPOTHESIS STATEMENT

So let’s look at Sammy. In Sammy’s data, one of the patterns is that these behaviors are more likely to occur when he has been to more group activities during a day. When he checks his schedule and sees the teacher icon, he falls on the ground and screams. Sometimes he does this when he transitions out of the room for assembly and group activities. So this is kind of my summary of what we see in his data. Sammy screams and cries; when the staff tries to redirect him, he screams louder. If given the opportunity to go to a quiet area and calm down, he stops screaming and he’s calm, and the outcome is his staff moves him to the work table or the upcoming activity. His behavior continues.

So that tells us that when we look at Sammy’s behavior,

Sammy appears to engage in challenging behaviors to escape from tasks that are difficult for him. Some of these tasks are work-related, some may be overwhelming or difficult socially and some may be things that are frustrating for him like waiting. Engaging in significant challenging behavior serves to gain assistance or removal from these situations effectively. Sammy is more likely to engage in these behaviors when he’s had a lot of group work during the day.

So I put my setting event kind of at the end of that one. But you can see it’s obviously an escape from work and social situations that is the real underlying function. But I now know that there are certain tasks that I need to adjust to prevent the behaviors. I can teach him a way to escape appropriately as a replacement behavior.

And my outcome needs to be that the behavior doesn’t get him out of the task as quickly as the replacement behavior. And we’ll talk about all of that more when we talk about behavior support plans. But I want you to understand how it all lines up.

ESCAPING FROM WHAT?

Let’s think about Simon. Simon has had several instances talking to his friends in the atrium of the high school, and suddenly, in the middle of the conversation, he started telling the other kids that he was going to kill them. The other kids left him alone and went to tell the teacher. So let’s think about the function for Simon, or the hypothesis…

When presented with unstructured social interactions, which is when Simon is in the atrium of the school, there’s nobody there setting up interactions. Simon’s violent threats have been successful in extricating him from the social situation and escaping from the social demands.

So what we’re saying is that he is trying to escape social situations. Unstructured social situations set him up to have these behaviors, and this is a very efficient manner of getting people to leave him alone. So I now know that maybe I need to structure his social interactions a little bit more. I need to teach him a better way to get people to leave him alone more appropriately. And then we need to work on probably some underlying social skills as part of that as well.

ESCAPE FUNCTION WITH SETTING EVENT

Let’s look at Jimmy. Jimmy was playing with the other kids on the playground, and they were playing horse with the basketball. When it was Jimmy’s turn, he missed the basket. The other kids told him he got the letter S, and Bobby told him "better luck next time" and slapped him on the back. Okay, very common "hey, I’m trying to make you feel better" kind of behavior. Jimmy then hit Bobby and they got into a fight. When the playground supervisor asked what happened, Jimmy told her Bobby was bullying him. When we looked at Jimmy’s data, we found a large pattern of difficulty in social situations as the antecedent, and when we talked to the other kids that he was accusing of bullying him or fighting with him, we found he was misinterpreting their perspectives. He would tell them that they had done something.

And all of the things that he described were things that, from the perspective of the person who did them, were meant to be supportive, not problematic. So in knowing Jimmy and everything we know about Jimmy, we know that Jimmy has significant difficulty interpreting the perspectives of others and therefore understanding their intentions in his environment. He frequently interprets their behavior as a negative action toward himself. So….

When presented with an action, he interprets it negatively and he responds in a way to escape from that situation.

So he gets removed from the situation because he’s fighting. It gets him removed from the difficult situation. And so we’ve got an escape from social situations, but there’s an underlying setting event of not understanding the perspectives of other people.

And this is something we see a lot with our students with autism, that social piece is a big piece, but it’s also something I see a lot with students who have other types of disabilities other than autism where people aren’t necessarily picking up on the social thinking and the social perspective piece of it because they don’t have that diagnosis. So keep that in mind as we’re working with some of our students with emotional disturbances and things like that.

TANGIBLE SEEKING FUNCTION

Let’s look at two more. It’s time for Jimmy to be doing some math seat work, and instead he gets up and runs to the computer. He sits down, and when the teacher tries to move him back to his desk, he throws himself on the floor and kicks her. So in this case we’ve got a kid who clearly wants something that he can’t have. It’s time to do work, and so he’s going to that thing that he wants and behaving this way until it ends up being his turn. So we’ve got an obtaining function of a tangible item.

Jimmy is highly interested in the computer. When presented with a situation in which he has to wait his turn on the computer, he falls on the floor and kicks and screams until it is his turn.

AUTOMATIC FUNCTION HYPOTHESIS STATEMENT

Now let’s look at one that has an automatic function, because I think that’s a really hard one to focus on. Abe engages in a variety of repetitive movements throughout the day, including hitting his forehead and head with his hand. He will engage in these behaviors when there are no demands and there is no one to attend to him. These behaviors appear more frequently during downtime and appear to provide some type of internal reinforcement. So they occur more often when people are not around, and the staff report that he seems calmer after he hits himself. That’s kind of a summary of Abe. Our automatic reinforcement hypothesis might start with… when asked to wait, or left to work independently, or without someone specifically engaging him.

Because remember, we can only have an automatic function if the behavior would happen when nothing else is there and no one is around, because that means there are no other factors. That’s the way we rule it out. It can’t simply be “we don’t know what the function is, so we think it’s automatic.” Some people call the automatic function a sensory function; I think that’s a little misleading, and I talk about all of that in our episode on functions, which I’ll link in the show notes. But we really want to make sure that our antecedent is that he’s kind of left alone with nothing to do.

The behavior is that he frequently hits his head with his fist and following this behavior, his demeanor appears calmer. If stopped, he’ll begin to hit himself harder and scream. So that’s kind of our consequence for that behavior. So our hypothesis might be…

Abe engages in a variety of repetitive movements throughout the day, including hitting his forehead and head with his hand. He will engage in these behaviors when there are no demands and there is no one around to attend to him.   These behaviors appear more frequently during downtime and appear to provide some type of internal reinforcement. His demeanor appears calmer after completing them.

So that tells us that if we leave Abe alone, we need to give him something to do that he will engage with, because not having that is going to be a trigger for the automatic self-injurious behavior. We know that when he does this, we need to engage him in something so that the behaviors decrease, rather than simply trying to stop him. This then leads us to our behavior support plan.

HYPOTHESIS STATEMENT DO AND DON’TS

So I want to finish with a few do’s and don’ts about hypothesis statements. You want to make sure that you include as much information as possible. I realize that when I talk about hypothesis statements, some people think they’re kind of wordy, but I find that wordiness gives a good summary of the function of the behavior that can lead us directly into our behavior support plan. And I’ll talk in our next episode about how we do that.

How you write the hypothesis statements for your functional behavior assessment is critical to how strong your behavior support plan will be.

HYPOTHESIS STATEMENT DOS

DO: ONLY DESCRIBE WHAT YOU CAN SEE AND OBSERVE

We talked about that in the episode on data collection, and I’ll link to that episode. Earlier in the series we covered the fact that if I can’t see it, I don’t know that it’s happened, so I really have to focus on the behaviors that I can see.

DO: INCLUDE SETTING EVENTS

You want to make sure that you include your setting events in your hypothesis statements, because they are things you’re going to have to address in your behavior support plan.

DO: VERIFY HYPOTHESIS STATEMENTS

And so one thing that we can do is set up a situation similar to the thing that we think is setting off and reinforcing the behavior and see if it happens. So if the behavior is not self-injurious or really dangerous, then we could actually set up situations, take data and see if the behavior occurs in the situations that we think that they do.

DO: DEVELOP HYPOTHESIS STATEMENTS TIED TO OUR DATA

Another thing that we can do is develop a behavior support plan that we know is tightly tied to our hypotheses and take data to see whether or not the behavior continues. If it does not continue, that confirms our hypothesis. If it does continue, that tells us we need to go back and re-examine our hypothesis. So we can use our intervention as our way to verify our hypotheses. But when we do that, it’s critical that our hypothesis statements and our behavior support plans are very tightly linked. The format that you can download on the blog page will actually give you that linkage.

HYPOTHESIS STATEMENT DON’TS

So let’s talk about some things you shouldn’t do with your hypothesis statements.

DON’T GET DISTRACTED BY THE FORM OF BEHAVIOR

Don’t get misled by the form of the behavior. In other words, don’t assume that because somebody is biting or eating things they’re not supposed to have, the behavior must be automatically reinforced. Those behaviors can serve outward functions, driven by antecedents and consequences, as well. So just because a behavior involves a sense does not mean it has a sensory function.

DON’T ASSUME FUNCTIONS.

I think a lot of times we assume the automatic function or the sensory function because we can’t see what the pattern is. But that’s not really a valid way to make that decision, as I talked about earlier.

DON’T ASSUME THAT A BEHAVIOR HAS ONLY ONE FUNCTION.

Very frequently, behavior has more than one function, and you might have more than one hypothesis that describes the range of behaviors the student is showing or the range of situations in which the behaviors occur.

DON’T STOP TAKING DATA.

Now you don’t necessarily need to continue to take ABC data unless you really don’t know what your functions are. So if you haven’t been able to come up with a hypothesis statement, you need more data.

If you have a hypothesis statement, take that and make sure you’ve got solid baseline data on how often behaviors are occurring now. If you’ve been taking ABC data throughout the day, you can get that by adding up the incidents. Then look at taking something like frequency data or duration data to monitor your plan; we’ll talk about that in a future episode. But it’s important that we don’t stop taking data just because we’ve developed our hypothesis.

So I will be back next week to talk more about designing behavior support plans: how we take this information and actually turn it into something that may change the behavior of the student in your classroom. I know that’s the piece all of you have been waiting for, but you have to have these pieces in place in order to get there. That will be our next topic, and I will give you some examples and walk through how you take this information and turn it into a plan.

If you would like to do a bigger deep dive into behavioral problem solving, I highly encourage you to check out the Special Educator Academy. That is where you’ll find me. I’m available in our forums to answer questions and provide support, and our behavioral course has a wide variety of data sheets, strategies, videos, and information about this entire process that hopefully pulls it all together. And when there are questions about it, people can come to the community and ask them, and we’re all working off of the same page.

You can find more information about the Special Educator Academy at specialeducatoracademy.com. Come try our free 7-day trial and see if it’s for you.

Thank you so much for spending this time with me. I really appreciate it. I hope that this has been helpful in giving you some ideas about formulating hypotheses for your students, and I hope to see you again in our next episode.

I hope that you’re enjoying the podcast, and I’d love it if you’d hop over to iTunes and leave a review and/or subscribe so that you will continue to get episodes.

How to Write a Research Hypothesis: Good & Bad Examples


What is a research hypothesis?

A research hypothesis is an attempt at explaining a phenomenon or the relationships between phenomena/variables in the real world. Hypotheses are sometimes called “educated guesses”, but they are in fact (or let’s say they should be) based on previous observations, existing theories, scientific evidence, and logic. A research hypothesis is also not a prediction—rather, predictions are (or should be) based on clearly formulated hypotheses. For example, “We tested the hypothesis that KLF2 knockout mice would show deficiencies in heart development” is an assumption or prediction, not a hypothesis.

The research hypothesis at the basis of this prediction is “the product of the KLF2 gene is involved in the development of the cardiovascular system in mice”—and this hypothesis is probably (hopefully) based on a clear observation, such as that mice with low levels of Kruppel-like factor 2 (which KLF2 codes for) seem to have heart problems. From this hypothesis, you can derive the idea that a mouse in which this particular gene does not function cannot develop a normal cardiovascular system, and then make the prediction that we started with. 

What is the difference between a hypothesis and a prediction?

You might think that these are very subtle differences, and you will certainly come across many publications that do not contain an actual hypothesis or do not make these distinctions correctly. But considering that the formulation and testing of hypotheses is an integral part of the scientific method, it is good to be aware of the concepts underlying this approach. The two hallmarks of a scientific hypothesis are falsifiability (an evaluation standard that was introduced by the philosopher of science Karl Popper in 1934) and testability: if you cannot use experiments or data to decide whether an idea is true or false, then it is not a hypothesis (or at least it is a very bad one).

So, in a nutshell, you (1) look at existing evidence/theories, (2) come up with a hypothesis, (3) make a prediction that allows you to (4) design an experiment or data analysis to test it, and (5) come to a conclusion. Of course, not all studies have hypotheses (there is also exploratory or hypothesis-generating research), and you do not necessarily have to state your hypothesis as such in your paper. 

But for the sake of understanding the principles of the scientific method, let’s first take a closer look at the different types of hypotheses that research articles refer to and then give you a step-by-step guide for how to formulate a strong hypothesis for your own paper.

Types of Research Hypotheses

Hypotheses can be simple , which means they describe the relationship between one single independent variable (the one you observe variations in or plan to manipulate) and one single dependent variable (the one you expect to be affected by the variations/manipulation). If there are more variables on either side, you are dealing with a complex hypothesis. You can also distinguish hypotheses according to the kind of relationship between the variables you are interested in (e.g., causal or associative ). But apart from these variations, we are usually interested in what is called the “alternative hypothesis” and, in contrast to that, the “null hypothesis”. If you think these two should be listed the other way round, then you are right, logically speaking—the alternative should surely come second. However, since this is the hypothesis we (as researchers) are usually interested in, let’s start from there.

Alternative Hypothesis

If you predict a relationship between two variables in your study, then the research hypothesis that you formulate to describe that relationship is your alternative hypothesis (usually H1 in statistical terms). The goal of your hypothesis testing is thus to demonstrate that there is sufficient evidence that supports the alternative hypothesis, rather than evidence for the possibility that there is no such relationship. The alternative hypothesis is usually the research hypothesis of a study and is based on the literature, previous observations, and widely known theories. 

Null Hypothesis

The hypothesis that describes the other possible outcome, that is, that your variables are not related, is the null hypothesis ( H0 ). Based on your findings, you choose between the two hypotheses—usually that means that if your prediction was correct, you reject the null hypothesis and accept the alternative. Make sure, however, that you are not getting lost at this step of the thinking process: If your prediction is that there will be no difference or change, then you are trying to find support for the null hypothesis and reject H1. 

Directional Hypothesis

While the null hypothesis is obviously “static”, the alternative hypothesis can specify a direction for the observed relationship between variables—for example, that mice with higher expression levels of a certain protein are more active than those with lower levels. This is then called a one-tailed hypothesis. 

Another example for a directional one-tailed alternative hypothesis would be that 

H1: Attending private classes before important exams has a positive effect on performance. 

Your null hypothesis would then be that

H0: Attending private classes before important exams has no/a negative effect on performance.

Nondirectional Hypothesis

A nondirectional hypothesis does not specify the direction of the potentially observed effect, only that there is a relationship between the studied variables—this is called a two-tailed hypothesis. For instance, if you are studying a new drug that has shown some effects on pathways involved in a certain condition (e.g., anxiety) in vitro in the lab, but you can’t say for sure whether it will have the same effects in an animal model or maybe induce other/side effects that you can’t predict and potentially increase anxiety levels instead, you could state the two hypotheses like this:

H1: The drug that has so far only been tested in the lab (somehow) affects anxiety levels in an anxiety mouse model.

You then test this nondirectional alternative hypothesis against the null hypothesis:

H0: The drug that has so far only been tested in the lab has no effect on anxiety levels in an anxiety mouse model.
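To make the one-tailed vs. two-tailed distinction concrete, here is a minimal Python sketch (not from the article; the numbers and function names are illustrative) that computes both kinds of p-value for the same z statistic, using only the standard library:

```python
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Standard normal CDF, built from the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_value(z: float, tails: str = "two") -> float:
    """p-value for a z statistic.

    tails="two": nondirectional H1 (any difference, in either direction, counts).
    tails="one": directional H1 (only an effect in the predicted direction counts).
    """
    if tails == "two":
        return 2.0 * (1.0 - normal_cdf(abs(z)))
    return 1.0 - normal_cdf(z)

# The same test statistic can clear a 0.05 threshold one-tailed
# but not two-tailed:
z = 1.8
print(round(p_value(z, "two"), 3))  # ~0.072
print(round(p_value(z, "one"), 3))  # ~0.036
```

This illustrates the trade-off: a directional hypothesis buys sensitivity in the predicted direction, at the cost of being unable to detect an effect that runs the other way.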


How to Write a Hypothesis for a Research Paper

Now that we understand the important distinctions between different kinds of research hypotheses, let’s look at a simple process of how to write a hypothesis.

Writing a Hypothesis Step 1:

Ask a question, based on earlier research. Research always starts with a question, but one that takes into account what is already known about a topic or phenomenon. For example, if you are interested in whether people who have pets are happier than those who don’t, do a literature search and find out what has already been demonstrated. You will probably realize that yes, there is quite a bit of research that shows a relationship between happiness and owning a pet—and even studies that show that owning a dog is more beneficial than owning a cat ! Let’s say you are so intrigued by this finding that you wonder: 

What is it that makes dog owners even happier than cat owners? 

Let’s move on to Step 2 and find an answer to that question.

Writing a Hypothesis Step 2:

Formulate a strong hypothesis by answering your own question. Again, you don’t want to make things up, take unicorns into account, or repeat/ignore what has already been done. Looking at the dog-vs-cat papers your literature search returned, you see that most studies are based on self-report questionnaires on personality traits, mental health, and life satisfaction. What you don’t find is any data on actual (mental or physical) health measures, and no experiments. You therefore decide to come up with the carefully thought-through hypothesis that it’s maybe the lifestyle of the dog owners, which includes walking their dog several times per day, engaging in fun and healthy activities such as agility competitions, and taking them on trips, that gives them that extra boost in happiness. You could therefore answer your question in the following way:

Dog owners are happier than cat owners because of the dog-related activities they engage in.

Now you have to verify that your hypothesis fulfills the two requirements we introduced at the beginning of this resource article: falsifiability and testability . If it can’t be wrong and can’t be tested, it’s not a hypothesis. We are lucky, however, because yes, we can test whether owning a dog but not engaging in any of those activities leads to lower levels of happiness or well-being than owning a dog and playing and running around with them or taking them on trips.  

Writing a Hypothesis Step 3:

Make your predictions and define your variables. We have verified that we can test our hypothesis, but now we have to define all the relevant variables, design our experiment or data analysis, and make precise predictions. You could, for example, decide to study dog owners (not surprising at this point), let them fill in questionnaires about their lifestyle as well as their life satisfaction (as other studies did), and then compare two groups of active and inactive dog owners. Alternatively, if you want to go beyond the data that earlier studies produced and analyzed and directly manipulate the activity level of your dog owners to study the effect of that manipulation, you could invite them to your lab, select groups of participants with similar lifestyles, make them change their lifestyle (e.g., couch potato dog owners start agility classes, very active ones have to refrain from any fun activities for a certain period of time) and assess their happiness levels before and after the intervention. In both cases, your independent variable would be “ level of engagement in fun activities with dog” and your dependent variable would be happiness or well-being . 

Examples of a Good and Bad Hypothesis

Let’s look at a few examples of good and bad hypotheses to get you started.

Good Hypothesis Examples

Bad Hypothesis Examples

Tips for Writing a Research Hypothesis

If you understood the distinction between a hypothesis and a prediction we made at the beginning of this article, then you will have no problem formulating your hypotheses and predictions correctly. To refresh your memory: We have to (1) look at existing evidence, (2) come up with a hypothesis, (3) make a prediction, and (4) design an experiment. For example, you could summarize your dog/happiness study like this:

(1) While research suggests that dog owners are happier than cat owners, there are no reports on what factors drive this difference. (2) We hypothesized that it is the fun activities that many dog owners (but very few cat owners) engage in with their pets that increases their happiness levels. (3) We thus predicted that preventing very active dog owners from engaging in such activities for some time and making very inactive dog owners take up such activities would lead to an increase and decrease in their overall self-ratings of happiness, respectively. (4) To test this, we invited dog owners into our lab, assessed their mental and emotional well-being through questionnaires, and then assigned them to an “active” and an “inactive” group, depending on… 

Note that you use “we hypothesize” only for your hypothesis, not for your experimental prediction, and “would” or “if – then” only for your prediction, not your hypothesis. A hypothesis that states that something “would” affect something else sounds as if you don’t have enough confidence to make a clear statement—in which case you can’t expect your readers to believe in your research either. Write in the present tense, don’t use modal verbs that express varying degrees of certainty (such as may, might, or could), and remember that you are not drawing a conclusion: without exaggerating, you are making a clear statement that you then, in a way, try to disprove. And if that happens, that is not something to fear but an important part of the scientific process.

Similarly, don’t use “we hypothesize” when you explain the implications of your research or make predictions in the conclusion section of your manuscript, since these are clearly not hypotheses in the true sense of the word. As we said earlier, you will find that many authors of academic articles do not seem to care too much about these rather subtle distinctions, but thinking very clearly about your own research will not only help you write better but also ensure that even that infamous Reviewer 2 will find fewer reasons to nitpick about your manuscript. 

Perfect Your Manuscript With Professional Editing

Now that you know how to write a strong research hypothesis for your research paper, you might be interested in our free AI proofreader , Wordvice AI, which finds and fixes errors in grammar, punctuation, and word choice in academic texts. Or if you are interested in human proofreading , check out our English editing services , including research paper editing and manuscript editing .

On the Wordvice academic resources website , you can also find many more articles and other resources that can help you with writing the other parts of your research paper , with making a research paper outline before you put everything together, or with writing an effective cover letter once you are ready to submit.


Functional Behavior Assessment

  • Overview of Functional Behavior Assessment
  • Step 1 Planning
  • Step 2.1 Collect baseline data using direct and indirect assessment methods
  • Step 2.2 Gather observation-based data on the occurrence of the interfering behavior
  • Step 2.3a Identify variables of the behavior

Step 2.3b Create a hypothesis statement for the purpose of the behavior

  • Step 2.3c Test the hypothesis (behavior) statement
  • Step 2.4 Develop a behavior intervention plan (BIP)
  • Practice Scenarios: Implementing FBA
  • Knowledge Check
  • Step 3 Monitoring Progress
  • Module Resources

Create a hypothesis (behavior) statement

A hypothesis statement should be based upon the assessment results and describe the best guess of the purpose of the behavior in sufficient detail. That is, what is the behavior trying to tell us? Analyzing assessment data helps team members identify patterns of behavior across time and settings. Oftentimes, patterns of behavior and the possible reasons for the behaviors will be obvious; however, at other times, the behavior patterns may be subtle and difficult to identify. When this occurs, additional data might need to be gathered to guide the development of a behavior statement.

Team members develop a behavior statement for the interfering behavior that includes:

  • the setting events, immediate antecedents, and immediate consequences that surround the interfering behavior
  • a restatement and refinement of the description of the interfering behavior that is occurring
  • the purpose the behavior serves (i.e., get/obtain, escape/avoid)

Example hypothesis (behavior) statement:

“Tino falls onto the floor, screaming and crying, when asked to clean up his toys, and he is then taken to his room where his mom rocks him on the rocking chair to calm him down.”
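As a sketch of how the components listed above (setting events, antecedent, behavior, consequence, and function) might be captured in a structured record, here is a small Python dataclass. The field names and the `summarize` helper are illustrative assumptions, not part of the FBA module:

```python
from dataclasses import dataclass

@dataclass
class HypothesisStatement:
    """A hypothetical structured template for a behavior (hypothesis) statement."""
    setting_events: str   # broader conditions that make the behavior more likely
    antecedent: str       # what immediately precedes the behavior
    behavior: str         # the observable interfering behavior
    consequence: str      # what immediately follows the behavior
    function: str         # e.g., "get/obtain" or "escape/avoid"

    def summarize(self) -> str:
        """Render the record as a one-sentence hypothesis statement."""
        return (f"When {self.antecedent}, the student {self.behavior}; "
                f"this is followed by {self.consequence} "
                f"[hypothesized function: {self.function}].")

# Tino's example from the module, mapped onto the template:
tino = HypothesisStatement(
    setting_events="end of playtime",
    antecedent="asked to clean up his toys",
    behavior="falls onto the floor, screaming and crying",
    consequence="being taken to his room and rocked by his mom",
    function="escape/avoid the clean-up demand",
)
print(tino.summarize())
```

Keeping each component in its own field makes it harder to omit a piece of the statement and easier to check, later, that the behavior intervention plan addresses every component.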


This project is a program of the Frank Porter Graham Child Development Institute at University of North Carolina at Chapel Hill .

Research Hypothesis In Psychology: Types, & Examples

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


A research hypothesis, in its plural form “hypotheses,” is a specific, testable prediction about the anticipated results of a study, established at its outset. It is a key component of the scientific method .

Hypotheses connect theory to data and guide the research process towards expanding scientific understanding.

Some key points about hypotheses:

  • A hypothesis expresses an expected pattern or relationship. It connects the variables under investigation.
  • It is stated in clear, precise terms before any data collection or analysis occurs. This makes the hypothesis testable.
  • A hypothesis must be falsifiable. It should be possible, even if unlikely in practice, to collect data that disconfirms rather than supports the hypothesis.
  • Hypotheses guide research. Scientists design studies to explicitly evaluate hypotheses about how nature works.
  • For a hypothesis to be valid, it must be testable against empirical evidence. The evidence can then confirm or disprove the testable predictions.
  • Hypotheses are informed by background knowledge and observation, but go beyond what is already known to propose an explanation of how or why something occurs.

Predictions typically arise from a thorough knowledge of the research literature, curiosity about real-world problems or implications, and integrating this to advance theory. They build on existing literature while providing new insight.

Types of Research Hypotheses

Alternative Hypothesis

The research hypothesis is often called the alternative or experimental hypothesis in experimental research.

It typically suggests a potential relationship between two key variables: the independent variable, which the researcher manipulates, and the dependent variable, which is measured based on those changes.

The alternative hypothesis states a relationship exists between the two variables being studied (one variable affects the other).


An experimental hypothesis predicts what change(s) will occur in the dependent variable when the independent variable is manipulated.

It states that the results are not due to chance and are significant in supporting the theory being investigated.

The alternative hypothesis can be directional, indicating a specific direction of the effect, or non-directional, suggesting a difference without specifying its nature. It’s what researchers aim to support or demonstrate through their study.

Null Hypothesis

The null hypothesis states no relationship exists between the two variables being studied (one variable does not affect the other). There will be no changes in the dependent variable due to manipulating the independent variable.

It states results are due to chance and are not significant in supporting the idea being investigated.

The null hypothesis, positing no effect or relationship, is a foundational contrast to the research hypothesis in scientific inquiry. It establishes a baseline for statistical testing, promoting objectivity by initiating research from a neutral stance.

Many statistical methods are tailored to test the null hypothesis, determining the likelihood of observed results if no true effect exists.

This dual-hypothesis approach provides clarity, ensuring that research intentions are explicit, and fosters consistency across scientific studies, enhancing the standardization and interpretability of research outcomes.

Nondirectional Hypothesis

A non-directional hypothesis, also known as a two-tailed hypothesis, predicts that there is a difference or relationship between two variables but does not specify the direction of this relationship.

It merely indicates that a change or effect will occur without predicting which group will have higher or lower values.

For example, “There is a difference in performance between Group A and Group B” is a non-directional hypothesis.

Directional Hypothesis

A directional (one-tailed) hypothesis predicts the nature of the effect of the independent variable on the dependent variable, i.e., the direction in which the change will take place (greater, smaller, more, less).

It specifies whether one variable is greater, lesser, or different from another, rather than just indicating that there’s a difference without specifying its nature.

For example, “Exercise increases weight loss” is a directional hypothesis.


Falsifiability

The Falsification Principle, proposed by Karl Popper , is a way of demarcating science from non-science. It suggests that for a theory or hypothesis to be considered scientific, it must be testable and refutable.

Falsifiability emphasizes that scientific claims shouldn’t just be confirmable but should also have the potential to be proven wrong.

It means that there should exist some potential evidence or experiment that could prove the proposition false.

However many confirming instances exist for a theory, it only takes one counter-observation to falsify it. For example, the hypothesis that “all swans are white” can be falsified by observing a black swan.

For Popper, science should attempt to disprove a theory rather than attempt to continually provide evidence to support a research hypothesis.

Can a Hypothesis be Proven?

Hypotheses make probabilistic predictions. They state the expected outcome if a particular relationship exists. However, a study result supporting a hypothesis does not definitively prove it is true.

All studies have limitations. There may be unknown confounding factors or issues that limit the certainty of conclusions. Additional studies may yield different results.

In science, hypotheses can realistically only be supported with some degree of confidence, not proven. The process of science is to incrementally accumulate evidence for and against hypothesized relationships in an ongoing pursuit of better models and explanations that best fit the empirical data. But hypotheses remain open to revision and rejection if that is where the evidence leads.

  • Disproving a hypothesis is definitive. Solid disconfirmatory evidence will falsify a hypothesis and require altering or discarding it based on the evidence.
  • However, confirming evidence is always open to revision. Other explanations may account for the same results, and additional or contradictory evidence may emerge over time.

We can never 100% prove the alternative hypothesis. Instead, we see if we can disprove, or reject, the null hypothesis.

If we reject the null hypothesis, this doesn’t mean that our alternative hypothesis is correct, but it does lend support to the alternative/experimental hypothesis.

Upon analysis of the results, an alternative hypothesis can be rejected or supported, but it can never be proven to be correct. We must avoid any reference to results proving a theory as this implies 100% certainty, and there is always a chance that evidence may exist which could refute a theory.

How to Write a Hypothesis

  • Identify the variables . The researcher manipulates the independent variable; the dependent variable is the measured outcome.
  • Operationalize the variables being investigated . Operationalization refers to the process of making the variables physically measurable or testable, e.g., if you are studying aggression, you might count the number of punches given by participants.
  • Decide on a direction for your prediction . If there is evidence in the literature to support a specific effect of the independent variable on the dependent variable, write a directional (one-tailed) hypothesis. If there are limited or ambiguous findings in the literature regarding the effect of the independent variable on the dependent variable, write a non-directional (two-tailed) hypothesis.
  • Make it Testable : Ensure your hypothesis can be tested through experimentation or observation. It should be possible to prove it false (principle of falsifiability).
  • Clear & concise language . A strong hypothesis is concise (typically one to two sentences long), and formulated using clear and straightforward language, ensuring it’s easily understood and testable.

Consider a hypothesis many teachers might subscribe to: students work better on Monday morning than on Friday afternoon (IV=Day, DV= Standard of work).

Now, if we decide to study this by giving the same group of students a lesson on a Monday morning and a Friday afternoon and then measuring their immediate recall of the material covered in each session, we would end up with the following:

  • The alternative hypothesis states that students will recall significantly more information on a Monday morning than on a Friday afternoon.
  • The null hypothesis states that there will be no significant difference in the amount recalled on a Monday morning compared to a Friday afternoon. Any difference will be due to chance or confounding factors.
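
Because the same students are measured in both sessions, this design calls for a paired test. The sketch below assumes invented recall scores (items remembered) for ten hypothetical students and a 0.05 significance level:

```python
# Sketch of testing the Monday-vs-Friday hypothesis with invented
# recall scores for the same ten students in both conditions.
from scipy import stats

monday = [14, 12, 15, 11, 13, 16, 12, 14, 13, 15]
friday = [11, 10, 13, 9, 12, 13, 10, 12, 11, 12]

# Paired (related-samples) t-test: each student appears in both lists
t_stat, p_value = stats.ttest_rel(monday, friday)

if p_value < 0.05:
    print("Reject the null hypothesis: recall differs between sessions.")
else:
    print("Fail to reject the null hypothesis: no significant difference detected.")
```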

More Examples

  • Memory : Participants exposed to classical music during study sessions will recall more items from a list than those who studied in silence.
  • Social Psychology : Individuals who frequently engage in social media use will report higher levels of perceived social isolation compared to those who use it infrequently.
  • Developmental Psychology : Children who engage in regular imaginative play have better problem-solving skills than those who don’t.
  • Clinical Psychology : Cognitive-behavioral therapy will be more effective in reducing symptoms of anxiety over a 6-month period compared to traditional talk therapy.
  • Cognitive Psychology : Individuals who multitask between various electronic devices will have shorter attention spans on focused tasks than those who single-task.
  • Health Psychology : Patients who practice mindfulness meditation will experience lower levels of chronic pain compared to those who don’t meditate.
  • Organizational Psychology : Employees in open-plan offices will report higher levels of stress than those in private offices.
  • Behavioral Psychology : Rats rewarded with food after pressing a lever will press it more frequently than rats who receive no reward.



How to Write a Strong Hypothesis | Guide & Examples

Published on 6 May 2022 by Shona McCombes .

A hypothesis is a statement that can be tested by scientific research. If you want to test a relationship between two or more variables, you need to write hypotheses before you start your experiment or data collection.

Table of contents

  • What is a hypothesis
  • Developing a hypothesis (with example)
  • Hypothesis examples
  • Frequently asked questions about writing hypotheses

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess – it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations, and statistical analysis of data).

Variables in hypotheses

Hypotheses propose a relationship between two or more variables . An independent variable is something the researcher changes or controls. A dependent variable is something the researcher observes and measures.

Consider the hypothesis that daily exposure to the sun leads to increased levels of happiness. Here the independent variable is exposure to the sun – the assumed cause . The dependent variable is the level of happiness – the assumed effect .


Step 1: Ask a question

Writing a hypothesis begins with a research question that you want to answer. The question should be focused, specific, and researchable within the constraints of your project.

Step 2: Do some preliminary research

Your initial answer to the question should be based on what is already known about the topic. Look for theories and previous studies to help you form educated assumptions about what your research will find.

At this stage, you might construct a conceptual framework to identify which variables you will study and what you think the relationships are between them. Sometimes, you’ll have to operationalise more complex constructs.

Step 3: Formulate your hypothesis

Now you should have some idea of what you expect to find. Write your initial answer to the question in a clear, concise sentence.

Step 4: Refine your hypothesis

You need to make sure your hypothesis is specific and testable. There are various ways of phrasing a hypothesis, but all the terms you use should have clear definitions, and the hypothesis should contain:

  • The relevant variables
  • The specific group being studied
  • The predicted outcome of the experiment or analysis

Step 5: Phrase your hypothesis in three ways

To identify the variables, you can write a simple prediction in if … then form. The first part of the sentence states the independent variable and the second part states the dependent variable.

In academic research, hypotheses are more commonly phrased in terms of correlations or effects, where you directly state the predicted relationship between variables.

If you are comparing two groups, the hypothesis can state what difference you expect to find between them.

Step 6: Write a null hypothesis

If your research involves statistical hypothesis testing , you will also have to write a null hypothesis. The null hypothesis is the default position that there is no association between the variables. The null hypothesis is written as H₀, while the alternative hypothesis is H₁ or Hₐ.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
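
One way to see “how likely a pattern could have arisen by chance” is a permutation test: repeatedly shuffle the group labels and count how often a difference at least as large as the observed one appears. The sketch below uses only the Python standard library; the data and group sizes are invented for illustration:

```python
# Minimal permutation test: estimate how likely the observed group
# difference is under the null hypothesis that group labels don't matter.
import random
from statistics import mean

random.seed(42)  # make the simulation reproducible

group_a = [5.1, 4.8, 5.6, 5.3, 4.9, 5.4]  # invented measurements
group_b = [4.2, 4.5, 4.0, 4.4, 4.1, 4.6]
observed = mean(group_a) - mean(group_b)

pooled = group_a + group_b
n_a = len(group_a)
n_iter = 10_000
count = 0
for _ in range(n_iter):
    random.shuffle(pooled)  # reassign labels at random
    diff = mean(pooled[:n_a]) - mean(pooled[n_a:])
    if abs(diff) >= abs(observed):  # at least as extreme as observed
        count += 1

p_value = count / n_iter  # proportion of shuffles as extreme as the data
print(f"observed difference = {observed:.2f}, p is about {p_value:.4f}")
```

A tiny p-value means a difference this large almost never arises when labels are assigned at random, which is grounds to reject the null hypothesis.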

A hypothesis is not just a guess. It should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations, and statistical analysis of data).

A research hypothesis is your proposed answer to your research question. The research hypothesis usually includes an explanation (‘ x affects y because …’).

A statistical hypothesis, on the other hand, is a mathematical statement about a population parameter. Statistical hypotheses always come in pairs: the null and alternative hypotheses. In a well-designed study , the statistical hypotheses correspond logically to the research hypothesis.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

McCombes, S. (2022, May 06). How to Write a Strong Hypothesis | Guide & Examples. Scribbr. Retrieved 9 April 2024, from https://www.scribbr.co.uk/research-methods/hypothesis-writing/



Overview of the Scientific Method

Learning Objectives

  • Distinguish between a theory and a hypothesis.
  • Discover how theories are used to generate hypotheses and how the results of studies can be used to further inform theories.
  • Understand the characteristics of a good hypothesis.

Theories and Hypotheses

Before describing how to develop a hypothesis, it is important to distinguish between a theory and a hypothesis. A  theory  is a coherent explanation or interpretation of one or more phenomena. Although theories can take a variety of forms, one thing they have in common is that they go beyond the phenomena they explain by including variables, structures, processes, functions, or organizing principles that have not been observed directly. Consider, for example, Zajonc’s theory of social facilitation and social inhibition (1965) [1] . He proposed that being watched by others while performing a task creates a general state of physiological arousal, which increases the likelihood of the dominant (most likely) response. So for highly practiced tasks, being watched increases the tendency to make correct responses, but for relatively unpracticed tasks, being watched increases the tendency to make incorrect responses. Notice that this theory—which has come to be called drive theory—provides an explanation of both social facilitation and social inhibition that goes beyond the phenomena themselves by including concepts such as “arousal” and “dominant response,” along with processes such as the effect of arousal on the dominant response.

Outside of science, referring to an idea as a theory often implies that it is untested—perhaps no more than a wild guess. In science, however, the term theory has no such implication. A theory is simply an explanation or interpretation of a set of phenomena. It can be untested, but it can also be extensively tested, well supported, and accepted as an accurate description of the world by the scientific community. The theory of evolution by natural selection, for example, is a theory because it is an explanation of the diversity of life on earth—not because it is untested or unsupported by scientific research. On the contrary, the evidence for this theory is overwhelmingly positive and nearly all scientists accept its basic assumptions as accurate. Similarly, the “germ theory” of disease is a theory because it is an explanation of the origin of various diseases, not because there is any doubt that many diseases are caused by microorganisms that infect the body.

A  hypothesis , on the other hand, is a specific prediction about a new phenomenon that should be observed if a particular theory is accurate. It is an explanation that relies on just a few key concepts. Hypotheses are often specific predictions about what will happen in a particular study. They are developed by considering existing evidence and using reasoning to infer what will happen in the specific context of interest. Hypotheses are often, but not always, derived from theories: a hypothesis is frequently a prediction based on a theory, but some hypotheses are a-theoretical, and only after a set of observations has been made is a theory developed. This is because theories are broad in nature and explain larger bodies of data. If a research question is truly original, we may need to collect some data and make some observations before we can develop a broader theory.

Theories and hypotheses always have this  if-then  relationship. “ If   drive theory is correct,  then  cockroaches should run through a straight runway faster, and a branching runway more slowly, when other cockroaches are present.” Although hypotheses are usually expressed as statements, they can always be rephrased as questions. “Do cockroaches run through a straight runway faster when other cockroaches are present?” Thus deriving hypotheses from theories is an excellent way of generating interesting research questions.

But how do researchers derive hypotheses from theories? One way is to generate a research question using the techniques discussed in this chapter  and then ask whether any theory implies an answer to that question. For example, you might wonder whether expressive writing about positive experiences improves health as much as expressive writing about traumatic experiences. Although this  question  is an interesting one  on its own, you might then ask whether the habituation theory—the idea that expressive writing causes people to habituate to negative thoughts and feelings—implies an answer. In this case, it seems clear that if the habituation theory is correct, then expressive writing about positive experiences should not be effective because it would not cause people to habituate to negative thoughts and feelings. A second way to derive hypotheses from theories is to focus on some component of the theory that has not yet been directly observed. For example, a researcher could focus on the process of habituation—perhaps hypothesizing that people should show fewer signs of emotional distress with each new writing session.

Among the very best hypotheses are those that distinguish between competing theories. For example, Norbert Schwarz and his colleagues considered two theories of how people make judgments about themselves, such as how assertive they are (Schwarz et al., 1991) [2] . Both theories held that such judgments are based on relevant examples that people bring to mind. However, one theory was that people base their judgments on the  number  of examples they bring to mind and the other was that people base their judgments on how  easily  they bring those examples to mind. To test these theories, the researchers asked people to recall either six times when they were assertive (which is easy for most people) or 12 times (which is difficult for most people). Then they asked them to judge their own assertiveness. Note that the number-of-examples theory implies that people who recalled 12 examples should judge themselves to be more assertive because they recalled more examples, but the ease-of-examples theory implies that participants who recalled six examples should judge themselves as more assertive because recalling the examples was easier. Thus the two theories made opposite predictions so that only one of the predictions could be confirmed. The surprising result was that participants who recalled fewer examples judged themselves to be more assertive—providing particularly convincing evidence in favor of the ease-of-retrieval theory over the number-of-examples theory.

Theory Testing

The primary way that scientific researchers use theories is sometimes called the hypothetico-deductive method  (although this term is much more likely to be used by philosophers of science than by scientists themselves). Researchers begin with a set of phenomena and either construct a theory to explain or interpret them or choose an existing theory to work with. They then make a prediction about some new phenomenon that should be observed if the theory is correct. Again, this prediction is called a hypothesis. The researchers then conduct an empirical study to test the hypothesis. Finally, they reevaluate the theory in light of the new results and revise it if necessary. This process is usually conceptualized as a cycle because the researchers can then derive a new hypothesis from the revised theory, conduct a new empirical study to test the hypothesis, and so on. As  Figure 2.3  shows, this approach meshes nicely with the model of scientific research in psychology presented earlier in the textbook—creating a more detailed model of “theoretically motivated” or “theory-driven” research.

[Figure 2.3: The cycle of theory-driven research in the hypothetico-deductive method]

As an example, let us consider Zajonc’s research on social facilitation and inhibition. He started with a somewhat contradictory pattern of results from the research literature. He then constructed his drive theory, according to which being watched by others while performing a task causes physiological arousal, which increases an organism’s tendency to make the dominant response. This theory predicts social facilitation for well-learned tasks and social inhibition for poorly learned tasks. He now had a theory that organized previous results in a meaningful way—but he still needed to test it. He hypothesized that if his theory was correct, he should observe that the presence of others improves performance in a simple laboratory task but inhibits performance in a difficult version of the very same laboratory task. To test this hypothesis, one of the studies he conducted used cockroaches as subjects (Zajonc, Heingartner, & Herman, 1969) [3] . The cockroaches ran either down a straight runway (an easy task for a cockroach) or through a cross-shaped maze (a difficult task for a cockroach) to escape into a dark chamber when a light was shined on them. They did this either while alone or in the presence of other cockroaches in clear plastic “audience boxes.” Zajonc found that cockroaches in the straight runway reached their goal more quickly in the presence of other cockroaches, but cockroaches in the cross-shaped maze reached their goal more slowly when they were in the presence of other cockroaches. Thus he confirmed his hypothesis and provided support for his drive theory. (Zajonc later demonstrated the same effect in humans [Zajonc & Sales, 1966] [4] and in many subsequent studies.)

Incorporating Theory into Your Research

When you write your research report or plan your presentation, be aware that there are two basic ways that researchers usually include theory. The first is to raise a research question, answer that question by conducting a new study, and then offer one or more theories (usually more) to explain or interpret the results. This format works well for applied research questions and for research questions that existing theories do not address. The second way is to describe one or more existing theories, derive a hypothesis from one of those theories, test the hypothesis in a new study, and finally reevaluate the theory. This format works well when there is an existing theory that addresses the research question—especially if the resulting hypothesis is surprising or conflicts with a hypothesis derived from a different theory.

Using theories in your research will not only give you guidance in coming up with experiment ideas and possible projects, but will also lend legitimacy to your work. Psychologists have been interested in a variety of human behaviors and have developed many theories along the way. Using established theories will help you break new ground as a researcher, not limit you from developing your own ideas.

Characteristics of a Good Hypothesis

There are three general characteristics of a good hypothesis. First, a good hypothesis must be testable and falsifiable . We must be able to test the hypothesis using the methods of science and if you’ll recall Popper’s falsifiability criterion, it must be possible to gather evidence that will disconfirm the hypothesis if it is indeed false. Second, a good hypothesis must be logical. As described above, hypotheses are more than just a random guess. Hypotheses should be informed by previous theories or observations and logical reasoning. Typically, we begin with a broad and general theory and use  deductive reasoning to generate a more specific hypothesis to test based on that theory. Occasionally, however, when there is no theory to inform our hypothesis, we use  inductive reasoning  which involves using specific observations or research findings to form a more general hypothesis. Finally, the hypothesis should be positive. That is, the hypothesis should make a positive statement about the existence of a relationship or effect, rather than a statement that a relationship or effect does not exist. As scientists, we don’t set out to show that relationships do not exist or that effects do not occur so our hypotheses should not be worded in a way to suggest that an effect or relationship does not exist. The nature of science is to assume that something does not exist and then seek to find evidence to prove this wrong, to show that it really does exist. That may seem backward to you but that is the nature of the scientific method. The underlying reason for this is beyond the scope of this chapter but it has to do with statistical theory.

  • Zajonc, R. B. (1965). Social facilitation.  Science, 149 , 269–274 ↵
  • Schwarz, N., Bless, H., Strack, F., Klumpp, G., Rittenauer-Schatka, H., & Simons, A. (1991). Ease of retrieval as information: Another look at the availability heuristic.  Journal of Personality and Social Psychology, 61 , 195–202. ↵
  • Zajonc, R. B., Heingartner, A., & Herman, E. M. (1969). Social enhancement and impairment of performance in the cockroach.  Journal of Personality and Social Psychology, 13 , 83–92. ↵
  • Zajonc, R.B. & Sales, S.M. (1966). Social facilitation of dominant and subordinate responses. Journal of Experimental Social Psychology, 2 , 160-168. ↵

A coherent explanation or interpretation of one or more phenomena.

A specific prediction about a new phenomenon that should be observed if a particular theory is accurate.

A cyclical process of theory development, starting with an observed phenomenon, then developing or using a theory to make a specific prediction of what should happen if that theory is correct, testing that prediction, refining the theory in light of the findings, and using that refined theory to develop new hypotheses, and so on.

The ability to test the hypothesis using the methods of science and the possibility to gather evidence that will disconfirm the hypothesis if it is indeed false.

Research Methods in Psychology Copyright © 2019 by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


National Center for Pyramid Model Innovations


PBS Process


Step 1: Building a Behavior Support Team


PBS begins by developing a team of the key stakeholders or individuals who are most involved in the child’s life. This team should include the family and early educator, but also may include friends, other family members, therapists, and other instructional or administrative personnel. Team members collaborate in multiple ways in order to develop, implement, and monitor a child’s support plan.

When developing a behavior support team one must ask the four following questions:

  • Who  are the key stakeholders and individuals in this child’s life?
  • Why  is collaborative teaming a key element of PBS for this child?
  • What  do we need to do to make this a successful collaborative experience that will benefit the child and family?
  • How  are we going to promote the active participation of the family and all team members in the behavior support planning process?

Step 2: Person-Centered Planning

Person-centered planning provides a process for bringing the team together to discuss their vision and dreams for the child. Person-centered planning is a strength-based process that is a celebration of the child and a mechanism of establishing the commitment of the team members to supporting the child and family.

One of the key features of positive behavior support for young children with problem behavior and their families is a commitment to a collaborative team approach. This is especially important for children whose problem behavior occurs in multiple settings such as the home, preschool, therapy visits, etc.

In general, person centered planning processes use graphic recordings (usually words, pictures, and symbols on chart paper) and group facilitation techniques to guide the team through the process. For example, the facilitator is responsible for setting the agenda, ensuring equal opportunities for all to participate, handling conflict when necessary, and maintaining the group’s focus. The following well-known person centered planning processes share underlying values and similarities but may differ in their application.

Step 3: Functional Behavioral Assessment

Functional assessment is a process for determining the function of the child’s problem behavior. Functional Assessment or Functional Behavioral Assessment (FBA) involves the collection of data, observations, and information to develop a clear understanding of the relationship of events and circumstances that trigger and maintain problem behavior.

Functional behavioral assessment (FBA) is a process used to develop an understanding of a child’s challenging behavior (Carr et al., 1994; O’Neill et al., 1997; Hieneman et al., 1999). The goal of functional behavioral assessment is to identify the function of the child’s behavior—the reason or purpose why a child behaves as he/she does in specific situations. The process involves collecting information through the use of direct observations, interviews, record reviews (e.g., school and/or medical records, lesson plans, individualized education plans), and behavior rating scales. This information is used to understand patterns of the child’s challenging behavior—the ecological events or conditions that increase the likelihood of challenging behavior (i.e., setting events), what happens before the behavior occurs (i.e., triggers or antecedents), what the behavior looks like (i.e., the behavior), and what happens after the challenging behavior occurs (i.e., consequences). Once collected, the information is analyzed to determine the specific function or purpose of the challenging behavior—whether it occurs in order for the child to obtain something (e.g., attention, object, activity) or to escape something (e.g., demands, activities, social interactions) (Carr et al., 1994; O’Neill et al., 1997). The process is complete when there is enough information that will lead to the development of hypotheses or summary statements (Hieneman et al., 1999) that represent the behavior support team’s best guess or prediction as to what conditions reliably predict the occurrence of the child’s challenging behavior.

Step 4: Hypothesis Development

The functional assessment process is completed with the development of a behavior hypothesis statement. The behavior hypothesis statements summarize what is known about triggers, behaviors, and maintaining consequences, and offer an informed guess about the purpose of the problem behavior. Once a functional assessment is complete, the next step is to develop a hypothesis statement—a prediction or “best guess” of the function or reason a child’s challenging behavior occurs. This includes a description of the child’s challenging behavior (i.e., what the behavior looks like), information about the specific predictors or triggers that occurred before the child exhibited challenging behavior, the perceived purpose or function of the child’s behavior, as well as the maintaining consequences that followed. Predictors include both what conditions immediately precede the child’s behavior, as well as any setting events that may be presumed to increase the likelihood of the challenging behavior’s occurrence (e.g., lack of sleep, allergies/illnesses, social and interactional factors). Hypothesis development is a critically important step toward developing interventions that are directly linked to the function of the child’s challenging behavior (O’Neill et al., 1997).

Very young children have brief learning histories (Dunlap & Fox, 1996). In many cases, those with a limited repertoire of behavior will often use one behavior for several different purposes. For example, children often use a general tantrum (prolonged screaming, crying, pulling away) for multiple functions (e.g., request object and escape transition). Therefore, when sorting out hypotheses the support team should address all of the circumstances in which challenging behavior occurs rather than trying to match an individual function to each challenging behavior.

Once the behavior support team identifies its hypotheses, attention should be paid to the way by which hypotheses are written. They should be carefully written either as a series of sentences that include each component (e.g., description, predictors, purpose, maintaining consequences), or as a “when…then” or “if…then” statement (Hieneman et al., 1999). Remember the more clearly articulated the hypothesis, the more likely the hypothesis will clearly communicate the team’s understanding of the child’s challenging behavior.

Step 5: Behavior Support Plan Development

Once behavior hypotheses statements are developed to summarize the data gathered from the functional assessment process, the team can develop a behavior support plan. Essential components of the behavior support plan are prevention strategies, the instruction of replacement skills, new ways to respond to problem behavior, and lifestyle outcome goals.

The behavior support plan represents the culmination of the assessment process. Typically developed in connection with person-centered planning, the behavior support plan is the team’s action plan outlining the specific steps to be used to promote the child’s success and participation in daily activities and routines. In order to be most effective, behavior support plans should be both carefully developed and clearly written using plain language, incorporate the values of the family and support team, identify any prerequisite resources and training needs for implementation, and include individual components that are both easy to use and easy to remember.

Behavior support plans must contain the following components:

  • Behavior Hypothesis Statements –  Statements that include a description of the behavior, triggers or antecedents for the behavior, maintaining consequences, and the purpose of the problem behavior.
  • Prevention Strategies –  Strategies that may be used to reduce the likelihood that the child will have problem behavior. These may include environmental arrangements, personal support, changes in activities, new ways to prompt a child, changes in expectations, etc.
  • Replacement Skills –  Skills to teach that will replace the problem behavior.
  • Consequence Strategies –  Guidelines for how the adults will respond to problem behaviors in ways that will not maintain the behavior. In addition, this part of the plan may include positive reinforcement strategies for promoting the child’s use of new skills or appropriate behavior (this may also be included in prevention strategies)
  • Long Term Strategies –  This section of the plan may include long-term goals that will assist the child and family in meeting their vision of the child (e.g., develop friends, attend a community preschool program).

Step 6: Monitoring Outcomes

The effectiveness of the behavior support plan must be monitored. This monitoring includes measurement of changes in problem behavior and the achievement of new skills and lifestyle outcomes.

Once the child’s behavior support plan is developed, the behavior support team’s role is both to implement the plan itself and to monitor progress toward outcomes valued by the child’s family. The keys to successful outcomes are frequent data collection and consistency—relative not only to both when, where, and who implements the plan but also to how the plan is implemented (i.e., whether or not the same intervention steps are followed). Data collection (e.g., direct measurement and indirect measurement) should occur to document whether the plan is implemented with consistency and is effective in achieving the identified goals, as well as whether or not the replacement skills are durable over time (maintenance) and/or across settings/contexts (generalization). Data should be both easy to collect (e.g., rating scales, check sheets) and should be periodically reviewed by the behavior support team to ensure communication, make any adjustments as needed, and review progress relative to the long-term vision of the child and his/her family.

This website was made possible by Cooperative Agreement #H326B220002 which is funded by the U.S. Department of Education, Office of Special Education Programs. However, those contents do not necessarily represent the policy of the Department of Education, and you should not assume endorsement by the Federal Government. This website is maintained by the  University of South Florida . Contact  webmaster . © University of South Florida


What is a Hypothesis – Types, Examples and Writing Guide

What is a Hypothesis

Definition:

A hypothesis is an educated guess or proposed explanation for a phenomenon, based on initial observations or data. It is a tentative statement that can be tested, and potentially supported or refuted, through further investigation and experimentation.

Hypotheses are often used in scientific research to guide the design of experiments and the collection and analysis of data. A hypothesis is an essential element of the scientific method, as it allows researchers to make predictions about the outcomes of their experiments and to test those predictions to determine their accuracy.

Types of Hypothesis

The main types of hypotheses are as follows:

Research Hypothesis

A research hypothesis is a statement that predicts a relationship between variables. It is usually formulated as a specific statement that can be tested through research, and it is often used in scientific research to guide the design of experiments.

Null Hypothesis

The null hypothesis is a statement that assumes there is no significant difference or relationship between variables. It is often used as a starting point for testing the research hypothesis, and if the results of the study reject the null hypothesis, it suggests that there is a significant difference or relationship between variables.

Alternative Hypothesis

An alternative hypothesis is a statement that assumes there is a significant difference or relationship between variables. It is often used as an alternative to the null hypothesis and is tested against the null hypothesis to determine which statement is more accurate.
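The interplay between the null and alternative hypotheses can be made concrete with a small significance test. The sketch below is a minimal two-sample permutation test written with Python's standard library only; the test scores are invented to echo the familiar sleep-deprivation example, and the function name is our own, not a library API.

```python
import random
import statistics

def permutation_test(sample_a, sample_b, n_permutations=10_000, seed=0):
    """Two-sided permutation test of H0: both samples share one distribution.

    Returns the p-value: the fraction of random label shufflings whose
    absolute mean difference is at least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(sample_a) - statistics.mean(sample_b))
    pooled = list(sample_a) + list(sample_b)
    n_a = len(sample_a)
    hits = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            hits += 1
    return hits / n_permutations

# Invented test scores: sleep-deprived vs. rested participants.
deprived = [62, 58, 71, 64, 55, 60, 66]
rested = [78, 74, 81, 69, 77, 72, 80]
p_value = permutation_test(deprived, rested)
# A small p-value is evidence against the null hypothesis of "no difference",
# and therefore in favor of the alternative hypothesis.
```

Note that a small p-value leads us to reject the null hypothesis; it does not "prove" the alternative hypothesis, it only indicates the observed difference is unlikely under the assumption of no effect.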

Directional Hypothesis

A directional hypothesis is a statement that predicts the direction of the relationship between variables. For example, a researcher might predict that increasing the amount of exercise will result in a decrease in body weight.

Non-directional Hypothesis

A non-directional hypothesis is a statement that predicts the relationship between variables but does not specify the direction. For example, a researcher might predict that there is a relationship between the amount of exercise and body weight, but they do not specify whether increasing or decreasing exercise will affect body weight.
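The directional/non-directional distinction maps directly onto one-sided versus two-sided significance tests. Continuing the exercise-and-body-weight example, the stdlib sketch below computes both p-values for the same invented data; the weights and the function name are illustrative assumptions, not real study results.

```python
import random
import statistics

def one_and_two_sided_p(treated, control, n_permutations=10_000, seed=1):
    """Permutation p-values for the mean difference treated - control.

    The one-sided value tests the directional hypothesis that the treated
    group's mean is LOWER; the two-sided value tests the non-directional
    hypothesis that the means differ in either direction.
    """
    rng = random.Random(seed)
    observed = statistics.mean(treated) - statistics.mean(control)
    pooled = list(treated) + list(control)
    n = len(treated)
    one_sided = two_sided = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
        if diff <= observed:                # at least as extreme, predicted direction
            one_sided += 1
        if abs(diff) >= abs(observed):      # at least as extreme, either direction
            two_sided += 1
    return one_sided / n_permutations, two_sided / n_permutations

# Invented body weights (kg): high-exercise vs. low-exercise participants.
high_exercise = [68, 70, 65, 72, 66]
low_exercise = [75, 78, 74, 80, 77]
p_one, p_two = one_and_two_sided_p(high_exercise, low_exercise)
# The directional test concentrates on one tail, so when the data fall in
# the predicted direction, p_one is never larger than p_two.
```

This is why a directional hypothesis is a stronger claim: it buys a more sensitive test, but only in the direction the researcher committed to in advance.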

Statistical Hypothesis

A statistical hypothesis is a statement that assumes a particular statistical model or distribution for the data. It is often used in statistical analysis to test the significance of a particular result.

Composite Hypothesis

A composite hypothesis is a statement that assumes more than one condition or outcome. It can be divided into several sub-hypotheses, each of which represents a different possible outcome.

Empirical Hypothesis

An empirical hypothesis is a statement that is based on observed phenomena or data. It is often used in scientific research to develop theories or models that explain the observed phenomena.

Simple Hypothesis

A simple hypothesis is a statement that assumes only one outcome or condition. It is often used in scientific research to test a single variable or factor.

Complex Hypothesis

A complex hypothesis is a statement that assumes multiple outcomes or conditions. It is often used in scientific research to test the effects of multiple variables or factors on a particular outcome.

Applications of Hypothesis

Hypotheses are used in various fields to guide research and make predictions about the outcomes of experiments or observations. Here are some examples of how hypotheses are applied in different fields:

  • Science : In scientific research, hypotheses are used to test the validity of theories and models that explain natural phenomena. For example, a hypothesis might be formulated to test the effects of a particular variable on a natural system, such as the effects of climate change on an ecosystem.
  • Medicine : In medical research, hypotheses are used to test the effectiveness of treatments and therapies for specific conditions. For example, a hypothesis might be formulated to test the effects of a new drug on a particular disease.
  • Psychology : In psychology, hypotheses are used to test theories and models of human behavior and cognition. For example, a hypothesis might be formulated to test the effects of a particular stimulus on the brain or behavior.
  • Sociology : In sociology, hypotheses are used to test theories and models of social phenomena, such as the effects of social structures or institutions on human behavior. For example, a hypothesis might be formulated to test the effects of income inequality on crime rates.
  • Business : In business research, hypotheses are used to test the validity of theories and models that explain business phenomena, such as consumer behavior or market trends. For example, a hypothesis might be formulated to test the effects of a new marketing campaign on consumer buying behavior.
  • Engineering : In engineering, hypotheses are used to test the effectiveness of new technologies or designs. For example, a hypothesis might be formulated to test the efficiency of a new solar panel design.

How to write a Hypothesis

Here are the steps to follow when writing a hypothesis:

Identify the Research Question

The first step is to identify the research question that you want to answer through your study. This question should be clear, specific, and focused. It should be something that can be investigated empirically and that has some relevance or significance in the field.

Conduct a Literature Review

Before writing your hypothesis, it’s essential to conduct a thorough literature review to understand what is already known about the topic. This will help you to identify the research gap and formulate a hypothesis that builds on existing knowledge.

Determine the Variables

The next step is to identify the variables involved in the research question. A variable is any characteristic or factor that can vary or change. There are two types of variables: independent and dependent. The independent variable is the one that is manipulated or changed by the researcher, while the dependent variable is the one that is measured or observed as a result of the independent variable.

Formulate the Hypothesis

Based on the research question and the variables involved, you can now formulate your hypothesis. A hypothesis should be a clear and concise statement that predicts the relationship between the variables. It should be testable through empirical research and based on existing theory or evidence.

Write the Null Hypothesis

The null hypothesis is the counterpart of the alternative hypothesis, which is the hypothesis that you are testing. The null hypothesis states that there is no significant difference or relationship between the variables. It is important to write the null hypothesis because it allows you to compare your results with what would be expected by chance.

Refine the Hypothesis

After formulating the hypothesis, it’s important to refine it and make it more precise. This may involve clarifying the variables, specifying the direction of the relationship, or making the hypothesis more testable.

Examples of Hypothesis

Here are a few examples of hypotheses in different fields:

  • Psychology : “Increased exposure to violent video games leads to increased aggressive behavior in adolescents.”
  • Biology : “Higher levels of carbon dioxide in the atmosphere will lead to increased plant growth.”
  • Sociology : “Individuals who grow up in households with higher socioeconomic status will have higher levels of education and income as adults.”
  • Education : “Implementing a new teaching method will result in higher student achievement scores.”
  • Marketing : “Customers who receive a personalized email will be more likely to make a purchase than those who receive a generic email.”
  • Physics : “An increase in temperature will cause an increase in the volume of a gas, assuming all other variables remain constant.”
  • Medicine : “Consuming a diet high in saturated fats will increase the risk of developing heart disease.”

Purpose of Hypothesis

The purpose of a hypothesis is to provide a testable explanation for an observed phenomenon or a prediction of a future outcome based on existing knowledge or theories. A hypothesis is an essential part of the scientific method and helps to guide the research process by providing a clear focus for investigation. It enables scientists to design experiments or studies to gather evidence and data that can support or refute the proposed explanation or prediction.

The formulation of a hypothesis is based on existing knowledge, observations, and theories, and it should be specific, testable, and falsifiable. A specific hypothesis helps to define the research question, which is important in the research process as it guides the selection of an appropriate research design and methodology. Testability means that the hypothesis can be supported or refuted through empirical data collection and analysis. Falsifiability means that the hypothesis is formulated in such a way that it can be shown to be wrong if it is incorrect.

In addition to guiding the research process, the testing of hypotheses can lead to new discoveries and advancements in scientific knowledge. When a hypothesis is supported by the data, it can be used to develop new theories or models to explain the observed phenomenon. When a hypothesis is not supported by the data, it can help to refine existing theories or prompt the development of new hypotheses to explain the phenomenon.

When to use Hypothesis

Here are some common situations in which hypotheses are used:

  • In scientific research , hypotheses are used to guide the design of experiments and to help researchers make predictions about the outcomes of those experiments.
  • In social science research , hypotheses are used to test theories about human behavior, social relationships, and other phenomena.
  • In business , hypotheses can be used to guide decisions about marketing, product development, and other areas. For example, a hypothesis might be that a new product will sell well in a particular market, and this hypothesis can be tested through market research.

Characteristics of Hypothesis

Here are some common characteristics of a hypothesis:

  • Testable : A hypothesis must be able to be tested through observation or experimentation. This means that it must be possible to collect data that will either support or refute the hypothesis.
  • Falsifiable : A hypothesis must be able to be proven false if it is not supported by the data. If a hypothesis cannot be falsified, then it is not a scientific hypothesis.
  • Clear and concise : A hypothesis should be stated in a clear and concise manner so that it can be easily understood and tested.
  • Based on existing knowledge : A hypothesis should be based on existing knowledge and research in the field. It should not be based on personal beliefs or opinions.
  • Specific : A hypothesis should be specific in terms of the variables being tested and the predicted outcome. This will help to ensure that the research is focused and well-designed.
  • Tentative: A hypothesis is a tentative statement or assumption that requires further testing and evidence to be confirmed or refuted. It is not a final conclusion or assertion.
  • Relevant : A hypothesis should be relevant to the research question or problem being studied. It should address a gap in knowledge or provide a new perspective on the issue.

Advantages of Hypothesis

Hypotheses have several advantages in scientific research and experimentation:

  • Guides research: A hypothesis provides a clear and specific direction for research. It helps to focus the research question, select appropriate methods and variables, and interpret the results.
  • Predictive power : A hypothesis makes predictions about the outcome of research, which can be tested through experimentation. This allows researchers to evaluate the validity of the hypothesis and make new discoveries.
  • Facilitates communication: A hypothesis provides a common language and framework for scientists to communicate with one another about their research. This helps to facilitate the exchange of ideas and promotes collaboration.
  • Efficient use of resources: A hypothesis helps researchers to use their time, resources, and funding efficiently by directing them towards specific research questions and methods that are most likely to yield results.
  • Provides a basis for further research: A hypothesis that is supported by data provides a basis for further research and exploration. It can lead to new hypotheses, theories, and discoveries.
  • Increases objectivity: A hypothesis can help to increase objectivity in research by providing a clear and specific framework for testing and interpreting results. This can reduce bias and increase the reliability of research findings.

Limitations of Hypothesis

Some limitations of hypotheses are as follows:

  • Limited to observable phenomena: Hypotheses are limited to observable phenomena and cannot account for unobservable or intangible factors. This means that some research questions may not be amenable to hypothesis testing.
  • May be inaccurate or incomplete: Hypotheses are based on existing knowledge and research, which may be incomplete or inaccurate. This can lead to flawed hypotheses and erroneous conclusions.
  • May be biased: Hypotheses may be biased by the researcher’s own beliefs, values, or assumptions. This can lead to selective interpretation of data and a lack of objectivity in research.
  • Cannot prove causation: A hypothesis can only show a correlation between variables, but it cannot prove causation. This requires further experimentation and analysis.
  • Limited to specific contexts: Hypotheses are limited to specific contexts and may not be generalizable to other situations or populations. This means that results may not be applicable in other contexts or may require further testing.
  • May be affected by chance : Hypotheses may be affected by chance or random variation, which can obscure or distort the true relationship between variables.

About the author

Muhammad Hassan

Researcher, Academic Writer, Web developer


Psychology Hypothesis

Delving into the realm of human behavior and cognition, Psychology Hypothesis Statement Examples illuminate the intricate workings of the mind. These thesis statement examples span various psychological phenomena, offering insights into crafting hypotheses that drive impactful research. From personality traits to cognitive processes, explore the guide to formulate precise and insightful psychology hypothesis statements that shed light on the complexities of human psychology.

What is the Psychology Hypothesis?

In psychology, a hypothesis is a tentative statement or educated guess that proposes a potential relationship between variables. It serves as a foundation for research, guiding the investigation into specific psychological phenomena or behaviors. A well-constructed psychology hypothesis outlines the expected outcome of the study and provides a framework for data collection and analysis.

Example of a Psychology Hypothesis Statement :

Research Question: Does exposure to nature improve individuals’ mood and well-being?

Hypothesis Statement: “Individuals who spend more time in natural environments will report higher levels of positive mood and overall well-being compared to those who spend less time outdoors.”

In this example, the psychology hypothesis predicts a positive relationship between exposure to nature and improved mood and well-being. The statement sets the direction for the study and provides a clear basis for data collection and analysis.

100 Psychology Hypothesis Statement Examples


Psychology Hypothesis Statement Examples encompass a diverse range of human behaviors and mental processes. Dive into the complexities of the human mind with simple hypotheses that explore relationships, patterns, and influences on behavior. From memory recall to social interactions, these examples offer insights into crafting precise and impactful psychology hypotheses that drive meaningful research.

  • Effect of Color on Mood : Exposure to blue hues elevates mood in individuals.
  • Social Media and Self-Esteem : Higher social media usage correlates with lower self-esteem levels.
  • Sleep Quality and Cognitive Performance : Improved sleep quality enhances cognitive performance.
  • Personality Traits and Leadership : Extroverted individuals are more likely to assume leadership roles.
  • Parent-Child Attachment and Behavior : Strong parent-child attachment fosters positive behavior in children.
  • Cognitive Load and Decision Making : Increased cognitive load leads to poorer decision-making abilities.
  • Mindfulness Meditation and Stress Reduction : Regular mindfulness practice reduces stress levels.
  • Empathy and Altruistic Behavior : Higher empathy levels predict increased altruistic actions.
  • Positive Reinforcement and Learning : Positive reinforcement enhances learning outcomes in children.
  • Attachment Style and Romantic Relationships : Securely attached individuals experience more satisfying romantic relationships.
  • Body Image and Media Exposure : Greater exposure to idealized body images leads to negative body image perceptions.
  • Anxiety Levels and Academic Performance : Higher anxiety levels negatively impact academic achievement.
  • Parenting Style and Aggression : Authoritarian parenting style correlates with higher aggression in children.
  • Cognitive Aging and Memory Recall : Older adults experience reduced memory recall compared to younger individuals.
  • Peer Pressure and Risky Behavior : Peer pressure increases engagement in risky behaviors among adolescents.
  • Emotional Intelligence and Relationship Satisfaction : High emotional intelligence leads to greater relationship satisfaction.
  • Attachment Style and Coping Mechanisms : Insecure attachment is linked to maladaptive coping strategies.
  • Perceived Control and Stress Resilience : Higher perceived control buffers against the negative effects of stress.
  • Social Comparison and Self-Esteem : Frequent social comparison diminishes self-esteem levels.
  • Gender Stereotypes and Career Aspirations : Gender stereotypes influence career aspirations of young adults.
  • Technology Usage and Social Isolation : Increased technology usage contributes to feelings of social isolation.
  • Empathy and Conflict Resolution : Higher empathy levels facilitate effective conflict resolution.
  • Parental Influence and Academic Motivation : Parental involvement positively impacts student academic motivation.
  • Attention Deficit Hyperactivity Disorder (ADHD) and Video Games : Children with ADHD show increased hyperactivity after playing video games.
  • Positive Psychology Interventions and Well-being : Engaging in positive psychology interventions enhances overall well-being.
  • Social Support and Mental Health : Adequate social support leads to better mental health outcomes.
  • Parent-Child Communication and Risky Behavior : Open parent-child communication reduces engagement in risky behaviors.
  • Social Media and Body Dissatisfaction : Extensive social media use is linked to increased body dissatisfaction.
  • Personality Traits and Coping Strategies : Different personality traits influence varied coping mechanisms.
  • Peer Influence and Substance Abuse : Peer influence contributes to higher rates of substance abuse among adolescents.
  • Attentional Bias and Anxiety : Individuals with attentional bias are more prone to experiencing anxiety.
  • Attachment Style and Romantic Jealousy : Insecure attachment predicts higher levels of romantic jealousy.
  • Emotion Regulation and Well-being : Effective emotion regulation leads to greater overall well-being.
  • Parenting Styles and Academic Resilience : Supportive parenting styles enhance academic resilience in children.
  • Cultural Identity and Self-Esteem : Strong cultural identity is linked to higher self-esteem among minority individuals.
  • Working Memory and Problem-Solving : Better working memory capacity improves problem-solving abilities.
  • Fear Conditioning and Phobias : Fear conditioning contributes to the development of specific phobias.
  • Empathy and Prosocial Behavior : Higher empathy levels result in increased prosocial behaviors.
  • Social Anxiety and Online Communication : Individuals with social anxiety prefer online communication over face-to-face interactions.
  • Cognitive Biases and Decision-Making Errors : Cognitive biases lead to errors in judgment and decision-making.
  • Attachment Style and Romantic Attachment Patterns : Attachment style influences the development of romantic attachment patterns.
  • Self-Efficacy and Goal Achievement : Higher self-efficacy predicts greater success in achieving personal goals.
  • Stress Levels and Immune System Functioning : Elevated stress levels impair immune system functioning.
  • Social Media Use and Loneliness : Excessive social media use is associated with increased feelings of loneliness.
  • Emotion Recognition and Social Interaction : Improved emotion recognition skills enhance positive social interactions.
  • Perceived Control and Psychological Resilience : Strong perceived control fosters psychological resilience in adverse situations.
  • Narcissism and Online Self-Presentation : Narcissistic individuals engage in heightened self-promotion on social media.
  • Fear of Failure and Performance Anxiety : Fear of failure contributes to performance anxiety in high-pressure situations.
  • Gratitude Practice and Well-being : Regular gratitude practice leads to improved overall well-being.
  • Cultural Norms and Communication Styles : Cultural norms shape distinct communication styles among different groups.
  • Gender Identity and Mental Health : The alignment between gender identity and assigned sex at birth affects mental health outcomes.
  • Social Influence and Conformity : Social influence leads to increased conformity in group settings.
  • Parenting Styles and Attachment Security : Parenting styles influence the development of secure or insecure attachment in children.
  • Perceived Discrimination and Psychological Distress : Perceived discrimination is associated with higher levels of psychological distress.
  • Emotional Regulation Strategies and Impulse Control : Effective emotional regulation strategies enhance impulse control.
  • Cognitive Dissonance and Attitude Change : Cognitive dissonance prompts individuals to change attitudes to reduce discomfort.
  • Prejudice and Stereotype Formation : Exposure to prejudiced attitudes contributes to the formation of stereotypes.
  • Motivation and Goal Setting : High intrinsic motivation leads to more effective goal setting and achievement.
  • Coping Mechanisms and Trauma Recovery : Adaptive coping mechanisms facilitate better trauma recovery outcomes.
  • Personality Traits and Perceived Stress : Certain personality traits influence how individuals perceive and respond to stress.
  • Cognitive Biases and Decision-Making Strategies : Cognitive biases impact the strategies individuals use in decision-making.
  • Emotional Intelligence and Interpersonal Relationships : High emotional intelligence fosters healthier and more fulfilling interpersonal relationships.
  • Sensory Perception and Memory Formation : The accuracy of sensory perception influences the formation of memories.
  • Parental Influences and Peer Relationships : Parental attitudes shape the quality of adolescents’ peer relationships.
  • Social Comparison and Body Image : Frequent social comparison contributes to negative body image perceptions.
  • Attention Deficit Hyperactivity Disorder (ADHD) and Academic Achievement : Children with ADHD face challenges in achieving academic success.
  • Cultural Identity and Mental Health Stigma : Strong cultural identity buffers against the negative effects of mental health stigma.
  • Self-Esteem and Risk-Taking Behavior : Individuals with high self-esteem are more likely to engage in risk-taking behaviors.
  • Resilience and Adversity Coping : High resilience levels enhance individuals’ ability to cope with adversity.
  • Motivation and Learning Styles : Different types of motivation influence preferred learning styles.
  • Body Language and Nonverbal Communication : Body language cues play a significant role in nonverbal communication effectiveness.
  • Social Identity and Intergroup Bias : Strong identification with a social group contributes to intergroup bias.
  • Mindfulness Practice and Anxiety Reduction : Regular mindfulness practice leads to decreased levels of anxiety.
  • Attachment Style and Romantic Satisfaction : Attachment style influences satisfaction levels in romantic relationships.
  • Intrinsic vs. Extrinsic Motivation : Intrinsic motivation yields more sustainable outcomes than extrinsic motivation.
  • Attention Allocation and Multitasking Performance : Efficient attention allocation enhances multitasking performance.
  • Neuroplasticity and Skill Acquisition : Neuroplasticity supports the acquisition and refinement of new skills.
  • Prejudice Reduction Interventions and Attitude Change : Prejudice reduction interventions lead to positive attitude changes.
  • Parental Support and Adolescent Resilience : Strong parental support enhances resilience in adolescents facing challenges.
  • Social Media Use and FOMO (Fear of Missing Out) : Extensive social media use contributes to higher levels of FOMO.
  • Mood and Decision-Making Biases : Different mood states influence cognitive biases in decision-making.
  • Parental Attachment and Peer Influence : Strong parental attachment moderates the impact of peer influence on adolescents.
  • Personality Traits and Job Satisfaction : Certain personality traits predict higher job satisfaction levels.
  • Social Support and Post-Traumatic Growth : Adequate social support fosters post-traumatic growth after adversity.
  • Cognitive Load and Creativity : High cognitive load impedes creative thinking and problem-solving.
  • Self-Efficacy and Goal Persistence : Higher self-efficacy leads to increased persistence in achieving goals.
  • Stress and Physical Health : Chronic stress negatively affects physical health outcomes.
  • Perceived Control and Psychological Well-being : Strong perceived control is linked to greater psychological well-being.
  • Parenting Styles and Emotional Regulation in Children : Authoritative parenting styles promote effective emotional regulation.
  • Cultural Exposure and Empathy Levels : Exposure to diverse cultures enhances empathetic understanding.
  • Emotional Intelligence and Conflict Resolution : High emotional intelligence leads to more effective conflict resolution strategies.
  • Personality Traits and Leadership Styles : Different personality traits align with distinct leadership approaches.
  • Attachment Style and Romantic Relationship Quality : Secure attachment predicts higher quality romantic relationships.
  • Social Comparison and Self-Perception : Frequent social comparison impacts individuals’ self-perception and self-esteem.
  • Mindfulness Meditation and Stress Resilience : Regular mindfulness practice enhances resilience in the face of stress.
  • Cognitive Biases and Prejudice Formation : Cognitive biases contribute to the formation and reinforcement of prejudices.
  • Parenting Styles and Social Skills Development : Authoritative parenting styles foster positive social skills in children.
  • Emotion Regulation Strategies and Mental Health : Effective emotion regulation strategies contribute to better mental health outcomes.
  • Self-Esteem and Academic Achievement : Higher self-esteem correlates with improved academic performance.
  • Cultural Identity and Intergroup Bias : Strong cultural identity buffers against the effects of intergroup bias.

Psychology Hypothesis Statement Examples for Social Experiments & Studies : Dive into social dynamics with hypotheses that explore human behavior in various contexts. These examples delve into the intricate interplay of psychological factors in social experiments and studies, shedding light on how individuals interact, perceive, and respond within social environments.

  • Influence of Group Size on Conformity : Larger group sizes lead to higher levels of conformity in social experiments.
  • Effects of Positive Reinforcement on Prosocial Behavior : Positive reinforcement increases the likelihood of engaging in prosocial actions.
  • Role of Normative Social Influence in Decision Making : Normative social influence influences decision-making processes in group settings.
  • Impact of Obedience to Authority on Ethical Decision Making : Obedience to authority influences ethical decision-making tendencies.
  • Attribution Bias in Social Interactions : Attribution bias leads individuals to attribute their successes to internal factors and failures to external factors.
  • Social Comparison and Body Dissatisfaction : Frequent social comparison contributes to negative body image perceptions.
  • Perceived Control and Social Stress Resilience : Strong perceived control mitigates the negative effects of social stress.
  • Impression Management in Online Social Networks : Individuals engage in impression management to create a favorable online image.
  • Social Identity and Group Behavior : Strong social identity fosters a sense of belonging and influences group behavior.
  • Altruistic Behavior and Empathy Levels : Higher empathy levels correlate with increased engagement in altruistic actions.

Social Psychology Hypothesis Statement Examples : Explore the intricacies of human behavior within social contexts through these social psychology hypotheses. These examples delve into the dynamics of social interactions, group dynamics, and the psychological factors that influence how individuals perceive and respond to the social world.

  • Social Norms and Conformity : Individuals conform to social norms to gain social acceptance and avoid rejection.
  • Bystander Effect and Helping Behavior : The bystander effect decreases the likelihood of individuals offering help in emergency situations.
  • In-Group Bias and Intergroup Relations : In-group bias leads to favoritism toward members of one’s own social group.
  • Social Influence and Decision Making : Social influence impacts decision-making processes in group settings.
  • Deindividuation and Uninhibited Behavior : Deindividuation leads to reduced self-awareness and increased uninhibited behavior.
  • Perceived Social Support and Coping Mechanisms : Adequate social support enhances effective coping strategies in challenging situations.
  • Group Polarization and Risky Decision Making : Group discussions intensify individuals’ pre-existing inclinations, leading to riskier decisions.
  • Self-Esteem and Social Comparison : Individuals with lower self-esteem are more prone to engaging in negative social comparison.
  • Cultural Norms and Nonverbal Communication : Cultural norms influence nonverbal communication cues and interpretations.

Alternative Psychology Hypothesis Statement Examples : Explore alternative hypothesis perspectives on psychological phenomena with these hypotheses. These examples challenge conventional wisdom and encourage critical thinking, providing a fresh outlook on various aspects of human behavior, cognition, and emotions.

  • Nonverbal Communication and Introversion : Nonverbal cues may play a more significant role in communication for introverted individuals.
  • Perceived Control and External Locus of Control : High perceived control may lead to an external locus of control in certain situations.
  • Cognitive Dissonance and Reinforcement Theory : Cognitive dissonance can be explained through the lens of reinforcement theory.
  • Bystander Effect and Social Responsibility : The bystander effect may stem from individuals’ heightened sense of social responsibility.
  • Emotion Regulation and Emotional Suppression : Emotion regulation strategies like emotional suppression might lead to long-term emotional well-being.
  • Perceived Social Support and Emotional Independence : Adequate social support may contribute to emotional independence rather than dependence.
  • Cultural Identity and Interpersonal Conflict : Strong cultural identity might lead to increased interpersonal conflict due to differing values.
  • Parenting Styles and Personality Development : Parenting styles might have a limited impact on the formation of certain personality traits.
  • Social Media Use and Positive Self-Presentation : Extensive social media use may lead to a more authentic self-presentation.
  • Attentional Bias and Cognitive Flexibility : Attentional bias might enhance cognitive flexibility in specific cognitive tasks.

Psychology Hypothesis Statement Examples in Research : Explore the research hypotheses that guide scientific inquiry. These examples span various subfields of psychology, offering insights into human behavior, cognition, and emotions through the lens of empirical investigation.

  • Effects of Meditation on Mindfulness : Regular meditation practice enhances individuals’ mindfulness levels.
  • Impact of Parenting Styles on Self-Esteem : Parenting styles significantly influence children’s self-esteem development.
  • Emotion Regulation Strategies and Anxiety Levels : Effective emotion regulation strategies lead to decreased anxiety levels.
  • Cultural Identity and Academic Achievement : Strong cultural identity positively impacts academic achievement in multicultural settings.
  • Influence of Peer Pressure on Risky Behavior : Peer pressure increases engagement in risky behaviors among adolescents.
  • Effects of Social Support on Depression : Adequate social support leads to decreased depression symptoms in individuals.
  • Mindfulness Meditation and Attention Span : Regular mindfulness practice improves individuals’ attention span and focus.
  • Attachment Style and Romantic Satisfaction : Attachment style predicts satisfaction levels in romantic relationships.
  • Effects of Positive Feedback on Motivation : Positive feedback enhances intrinsic motivation for challenging tasks.
  • Impact of Sleep Quality on Memory Consolidation : Better sleep quality leads to improved memory consolidation during sleep.

Experimental Research in Psychology Hypothesis Examples : Embark on experimental journeys with hypotheses that guide controlled investigations into psychological phenomena. These examples facilitate the design and execution of experiments, allowing researchers to manipulate variables, observe outcomes, and draw evidence-based conclusions.

  • Effects of Color on Mood : Exposure to warm colors enhances positive mood, while cool colors evoke calmness.
  • Impact of Visual Distractions on Concentration : Visual distractions negatively affect individuals’ ability to concentrate on tasks.
  • Influence of Music Tempo on Heart Rate : Upbeat music tempo leads to increased heart rate and arousal.
  • Effects of Humor on Stress Reduction : Humor interventions reduce stress levels and increase feelings of relaxation.
  • Impact of Exercise on Cognitive Function : Regular aerobic exercise improves cognitive function and memory retention.
  • Influence of Social Norms on Helping Behavior : Observing prosocial behavior in others increases individuals’ likelihood of offering help.
  • Effects of Sleep Duration on Reaction Time : Longer sleep duration leads to faster reaction times in cognitive tasks.
  • Impact of Positive Affirmations on Self-Esteem : Repeating positive affirmations boosts self-esteem and self-confidence.
  • Influence of Noise Levels on Task Performance : High noise levels impair individuals’ performance on cognitive tasks.
  • Effects of Temperature on Aggressive Behavior : Elevated temperatures lead to an increase in aggressive behavior.

Psychology Hypothesis Tentative Statement Examples : Embark on the journey of exploration and inquiry with these tentative hypotheses. These examples reflect the initial assumptions and predictions that researchers formulate before conducting in-depth investigations, paving the way for further study and empirical examination.

  • Possible Effects of Mindfulness on Stress Reduction : Mindfulness practices might contribute to reduced stress levels in individuals.
  • Potential Impact of Social Media Use on Loneliness : Extensive social media use could be linked to increased feelings of loneliness.
  • Tentative Connection Between Personality Traits and Leadership Styles : Certain personality traits may align with specific leadership approaches.
  • Potential Relationship Between Parenting Styles and Academic Motivation : Different parenting styles might influence students’ motivation for academics.
  • Hypothesized Impact of Cognitive Training on Memory Enhancement : Cognitive training interventions may lead to improved memory function.
  • Preliminary Association Between Emotional Intelligence and Conflict Resolution : Higher emotional intelligence might be related to more effective conflict resolution.
  • Possible Effects of Music Exposure on Emotional Regulation : Listening to music might impact individuals’ ability to regulate emotions.
  • Tentative Link Between Self-Esteem and Resilience : Higher self-esteem may contribute to increased resilience in the face of challenges.
  • Potential Connection Between Cultural Exposure and Empathy Levels : Exposure to diverse cultures might influence individuals’ empathetic understanding.
  • Tentative Association Between Sleep Quality and Cognitive Performance : Better sleep quality could be linked to improved cognitive function.

Psychology Hypothesis Development Statement Examples : Formulate hypotheses that lay the groundwork for deeper exploration and understanding. These examples illustrate the process of hypothesis development, where researchers craft well-structured statements that guide empirical investigations and contribute to the advancement of psychological knowledge.

  • Development of a Hypothesis on Emotional Intelligence and Workplace Performance : Emotional intelligence positively influences workplace performance through enhanced interpersonal interactions and adaptive coping mechanisms.
  • Constructing a Hypothesis on Social Media Use and Well-being : Extensive social media use negatively impacts psychological well-being by fostering social comparison, reducing real-life social interactions, and increasing feelings of inadequacy.
  • Formulating a Hypothesis on Attachment Styles and Relationship Satisfaction : Secure attachment styles correlate positively with higher relationship satisfaction due to increased trust, effective communication, and emotional support.
  • Creating a Hypothesis on Parenting Styles and Child Aggression : Authoritative parenting styles lead to reduced child aggression through the cultivation of emotional regulation skills, consistent discipline, and nurturance.
  • Developing a Hypothesis on Cognitive Biases and Decision Making : Cognitive biases influence decision-making processes by shaping information processing, leading to deviations from rational decision-making models.
  • Constructing a Hypothesis on Cultural Identity and Psychological Well-being : Strong cultural identity positively impacts psychological well-being by fostering a sense of belonging, social support, and cultural pride.
  • Formulating a Hypothesis on Attachment Style and Coping Mechanisms : Attachment style influences coping mechanisms in response to stress, with secure attachments leading to adaptive strategies and insecure attachments resulting in maladaptive ones.
  • Creating a Hypothesis on Self-Efficacy and Academic Performance : High self-efficacy predicts better academic performance due to increased motivation, perseverance, and effective learning strategies.
  • Developing a Hypothesis on Gender Stereotypes and Career Aspirations : Gender stereotypes negatively impact women’s career aspirations by reinforcing traditional gender roles and limiting their perceived competence in certain fields.
  • Constructing a Hypothesis on Cultural Exposure and Empathy Levels : Exposure to diverse cultures enhances empathy levels by fostering cross-cultural understanding, reducing ethnocentrism, and promoting perspective-taking.

These psychology hypothesis development statement examples showcase the critical process of crafting hypotheses that guide research investigations and contribute to the depth and breadth of psychological knowledge.  In addition, you should review our  biology hypothesis .

How Do You Write a Psychology Hypothesis Statement? – Step by Step Guide

Crafting a psychology hypothesis statement is a crucial step in formulating research questions and designing empirical investigations. A well-structured hypothesis guides your research, helping you explore, analyze, and understand psychological phenomena. Follow this step-by-step guide to create effective psychology hypothesis statements:

  • Identify Your Research Question : Start by identifying the specific psychological phenomenon or relationship you want to explore. Your hypothesis should address a clear research question.
  • Choose the Appropriate Type of Hypothesis : Decide whether your hypothesis will be directional (predicting a specific relationship) or non-directional (predicting a relationship without specifying its direction).
  • State Your Variables : Clearly identify the independent variable (the factor you’re manipulating or examining) and the dependent variable (the outcome you’re measuring).
  • Write a Null Hypothesis (If Applicable) : If your research involves comparing groups or conditions, formulate a null hypothesis that states there’s no significant difference or relationship.
  • Formulate the Hypothesis : Craft a clear and concise statement that predicts the expected relationship between your variables. Use specific language and avoid vague terms.
  • Use Clear Language : Write your hypothesis in a simple, straightforward manner that is easily understandable by both researchers and readers.
  • Ensure Testability : Your hypothesis should be testable through empirical research. It should allow you to collect data, analyze results, and draw conclusions.
  • Consider the Population : Specify the population you’re studying (e.g., adults, adolescents, specific groups) to make your hypothesis more precise.
  • Be Falsifiable : A good hypothesis can be proven false through empirical evidence. Avoid making statements that cannot be tested or verified.
  • Revise and Refine : Review your hypothesis for clarity, coherence, and accuracy. Make revisions as needed to ensure it accurately reflects your research question.

Tips for Writing a Psychology Hypothesis

Writing an effective psychology hypothesis statement requires careful consideration and attention to detail. Follow these tips to craft compelling hypotheses:

  • Be Specific : Clearly define your variables and the expected relationship between them. Avoid vague or ambiguous language.
  • Avoid Bias : Ensure your hypothesis is objective and unbiased. Avoid making assumptions or including personal opinions.
  • Use Measurable Terms : Use terms that can be quantified and measured in your research. This makes data collection and analysis more manageable.
  • Consult Existing Literature : Review relevant literature to ensure your hypothesis aligns with existing research and theories in the field.
  • Consider Alternative Explanations : Acknowledge other potential explanations for your findings and consider how they might influence your hypothesis.
  • Stay Consistent : Keep your hypothesis consistent with the overall research question and objectives of your study.
  • Keep It Concise : Write your hypothesis in a concise manner, avoiding unnecessary complexity or jargon.
  • Test Your Hypothesis : Consider how you would test your hypothesis using empirical methods. Ensure it’s feasible and practical to gather data to support or refute it.
  • Seek Feedback : Share your hypothesis with peers, mentors, or advisors to receive constructive feedback and suggestions for improvement.
  • Refine as Needed : As you gather data and analyze results, be open to revising your hypothesis based on the evidence you uncover.

Crafting a psychology hypothesis statement is a dynamic process that involves careful thought, research, and refinement. A well-constructed hypothesis sets the stage for rigorous and meaningful scientific inquiry in the field of psychology.


Hypothesis Testing | A Step-by-Step Guide with Easy Examples

Published on November 8, 2019 by Rebecca Bevans . Revised on June 22, 2023.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics . It is most often used by scientists to test specific predictions, called hypotheses, that arise from theories.

There are 5 main steps in hypothesis testing:

  • State your research hypothesis as a null hypothesis (H0) and an alternate hypothesis (Ha or H1).
  • Collect data in a way designed to test the hypothesis.
  • Perform an appropriate statistical test .
  • Decide whether to reject or fail to reject your null hypothesis.
  • Present the findings in your results and discussion section.

Though the specific details might vary, the procedure you will use when testing a hypothesis will always follow some version of these steps.

Table of contents

  • Step 1: State your null and alternate hypothesis
  • Step 2: Collect data
  • Step 3: Perform a statistical test
  • Step 4: Decide whether to reject or fail to reject your null hypothesis
  • Step 5: Present your findings
  • Other interesting articles
  • Frequently asked questions about hypothesis testing

After developing your initial research hypothesis (the prediction that you want to investigate), it is important to restate it as a null (H0) and alternate (Ha) hypothesis so that you can test it mathematically.

The alternate hypothesis is usually your initial hypothesis that predicts a relationship between variables. The null hypothesis is a prediction of no relationship between the variables you are interested in.

  • H0: Men are, on average, not taller than women.
  • Ha: Men are, on average, taller than women.

For a statistical test to be valid , it is important to perform sampling and collect data in a way that is designed to test your hypothesis. If your data are not representative, then you cannot make statistical inferences about the population you are interested in.

There are a variety of statistical tests available, but they are all based on the comparison of within-group variance (how spread out the data is within a category) versus between-group variance (how different the categories are from one another).

If the between-group variance is large enough that there is little or no overlap between groups, then your statistical test will reflect that by showing a low p -value . This means it is unlikely that the differences between these groups came about by chance.

Alternatively, if there is high within-group variance and low between-group variance, then your statistical test will reflect that with a high p -value. This means it is likely that any difference you measure between groups is due to chance.

Your choice of statistical test will be based on the type of variables and the level of measurement of your collected data .

  • an estimate of the difference in average height between the two groups.
  • a p -value showing how likely you are to see this difference if the null hypothesis of no difference is true.
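This step for the height example can be sketched in a few lines of Python. Everything below (sample sizes, means, standard deviations) is simulated for illustration, not real survey data:

```python
import numpy as np
from scipy import stats

# Simulated height samples in cm (illustrative values, not survey data)
rng = np.random.default_rng(42)
men = rng.normal(loc=175, scale=7, size=200)
women = rng.normal(loc=162, scale=6, size=200)

# Estimate of the difference in average height between the two groups
diff = men.mean() - women.mean()

# Two-sample t-test: the p-value is the probability of seeing a difference
# at least this large if the null hypothesis of no difference were true
t_stat, p_value = stats.ttest_ind(men, women)

print(f"difference in means: {diff:.2f} cm")
print(f"p-value: {p_value:.3g}")
```

With clearly separated groups like these, the between-group variance dominates the within-group variance, so the p-value comes out very small.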

Based on the outcome of your statistical test, you will have to decide whether to reject or fail to reject your null hypothesis.

In most cases you will use the p -value generated by your statistical test to guide your decision. And in most cases, your predetermined level of significance for rejecting the null hypothesis will be 0.05 – that is, when there is a less than 5% chance that you would see these results if the null hypothesis were true.

In some cases, researchers choose a more conservative level of significance, such as 0.01 (1%). This minimizes the risk of incorrectly rejecting the null hypothesis ( Type I error ).
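The decision rule itself is mechanical; a minimal sketch:

```python
def decide(p_value: float, alpha: float = 0.05) -> str:
    # Reject H0 when p < alpha; otherwise we fail to reject it
    # (hypothesis testing never "accepts" the null hypothesis)
    return "reject H0" if p_value < alpha else "fail to reject H0"

print(decide(0.003))             # prints "reject H0"
print(decide(0.03, alpha=0.01))  # prints "fail to reject H0" (conservative threshold)
print(decide(0.20))              # prints "fail to reject H0"
```

Note how the same p-value of 0.03 leads to rejection at the conventional 0.05 level but not at the more conservative 0.01 level.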

The results of hypothesis testing will be presented in the results and discussion sections of your research paper , dissertation or thesis .

In the results section you should give a brief summary of the data and a summary of the results of your statistical test (for example, the estimated difference between group means and associated p -value). In the discussion , you can discuss whether your initial hypothesis was supported by your results or not.

In the formal language of hypothesis testing, we talk about rejecting or failing to reject the null hypothesis. You will probably be asked to do this in your statistics assignments.

However, when presenting research results in academic papers we rarely talk this way. Instead, we go back to our alternate hypothesis (in this case, the hypothesis that men are on average taller than women) and state whether the result of our test did or did not support the alternate hypothesis.

If your null hypothesis was rejected, this result is interpreted as “supported the alternate hypothesis.”

These are superficial differences; you can see that they mean the same thing.

You might notice that we don’t say that we reject or fail to reject the alternate hypothesis . This is because hypothesis testing is not designed to prove or disprove anything. It is only designed to test whether a pattern we measure could have arisen spuriously, or by chance.

If we reject the null hypothesis based on our research (i.e., we find that it is unlikely that the pattern arose by chance), then we can say our test lends support to our hypothesis . But if the pattern does not pass our decision rule, meaning that it could have arisen by chance, then we say the test is inconsistent with our hypothesis .

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Descriptive statistics
  • Measures of central tendency
  • Correlation coefficient

Methodology

  • Cluster sampling
  • Stratified sampling
  • Types of interviews
  • Cohort study
  • Thematic analysis

Research bias

  • Implicit bias
  • Cognitive bias
  • Survivorship bias
  • Availability heuristic
  • Nonresponse bias
  • Regression to the mean

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Null and alternative hypotheses are used in statistical hypothesis testing . The null hypothesis of a test always predicts no effect or no relationship between variables, while the alternative hypothesis states your research prediction of an effect or relationship.


6 Hypothesis Examples in Psychology

The hypothesis is one of the most important steps of psychological research. A hypothesis is an assumption, or tentative statement, that the researcher makes before conducting an experiment about its possible outcome. A hypothesis can be tested with various scientific and statistical tools. It is a logical guess based on previous knowledge and investigations related to the problem under study. In this article, we'll look at the significance of the hypothesis, the sources of hypotheses, and various examples of hypotheses.

Sources of Hypothesis

Formulating a good hypothesis is not an easy task. The researcher must take care at several crucial steps to arrive at an accurate hypothesis, drawing on both creativity and years of experience, and must think critically to avoid errors such as choosing the wrong hypothesis. Although the hypothesis is considered the first step before further investigation such as data collection, formulating one also requires some data collection of its own: a review of the literature on the topic and an understanding of previous research on it. The following are some of the main sources that may help the researcher formulate a good hypothesis.

  • Reviewing studies and literature related to a similar problem.
  • Examining the available data concerning the problem.
  • Discussing the problem under investigation with colleagues or professional researchers.
  • Thorough investigation through field interviews or surveys of the people directly concerned with the problem.
  • Sometimes the intuition of a well-known and experienced researcher is also considered a good source for hypothesis formulation.

Real-Life Hypothesis Examples

1. Null Hypothesis and Alternative Hypothesis Examples

Every research problem-solving procedure begins with formulating the null hypothesis and the alternative hypothesis. The alternative hypothesis assumes a relationship between the variables under study, while the null hypothesis denies such a relationship. The following are examples of null and alternative hypotheses for given research problems.

Research Problem: What is the benefit of eating an apple daily on your health?

Alternative Hypothesis: Eating an apple daily reduces the chances of visiting the doctor.

Null Hypothesis: Eating an apple daily does not impact the frequency of visiting the doctor.

Research Problem: What is the impact of spending a lot of time on mobile phones on the attention span of teenagers?

Alternative Hypothesis: Time spent on mobile phones and attention span are negatively correlated.

Null Hypothesis: There is no correlation between teenagers' mobile phone use and their attention span.

Research Problem: What is the impact of flexible working hours on employees' job satisfaction?

Alternative Hypothesis: Employees who are offered flexible working hours have higher job satisfaction than employees who are not.

Null Hypothesis: There is no association between flexible working hours and job satisfaction.
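The mobile-phone pair above can be sketched as a correlation test. The data below are fabricated for illustration; in a real study, hours and attention scores would be measured for each teenager:

```python
import numpy as np
from scipy import stats

# Fabricated illustrative data: daily hours on mobile and an attention score
rng = np.random.default_rng(0)
hours_on_mobile = rng.uniform(1, 8, size=100)
attention_span = 30 - 2.0 * hours_on_mobile + rng.normal(0, 3, size=100)

# Pearson correlation: a significantly negative r supports the alternative
# hypothesis; otherwise we fail to reject the null of no correlation
r, p_value = stats.pearsonr(hours_on_mobile, attention_span)
print(f"r = {r:.2f}, p = {p_value:.3g}")
```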

2. Simple Hypothesis Examples

A hypothesis that includes only one independent variable (predictor) and one dependent variable (outcome) is termed a simple hypothesis. For example: children are more likely to develop clinical depression if their parents have suffered from clinical depression. Here, the independent variable is the parents' history of clinical depression, and the dependent (outcome) variable is clinical depression in the child or children. Another example of a simple hypothesis is given below.

  • If management provides official snack breaks to employees, employees are less likely to take off-site breaks. Here, providing snack breaks is the independent variable, and taking fewer off-site breaks is the dependent variable.

3. Complex Hypothesis Examples

If a hypothesis includes more than one independent (predictor) variable or more than one dependent (outcome) variable, it is known as a complex hypothesis. For example: clinical depression in children is associated with a family history of clinical depression and a stressful, hectic lifestyle. In this case, there are two independent variables (a family history of clinical depression, and a hectic and stressful lifestyle) and one dependent variable (clinical depression).

4. Logical Hypothesis Examples

If there is little evidence or few studies related to the problem, the researcher can use general logic to formulate a hypothesis. A logical hypothesis is supported through reasoning rather than data. For example, if the researcher wants to argue that animals need water for survival, this can be verified through the logic that living beings cannot survive without water. The following are some more examples of logical hypotheses.

  • Tia is not good at maths, hence she will not choose accounting as her career.
  • If there is a correlation between skin cancer and ultraviolet rays, then people who are more exposed to ultraviolet rays are more prone to skin cancer.
  • Beings from other planets cannot breathe in the earth's atmosphere.
  • Creatures living in the sea use anaerobic respiration, while those living outside the sea use aerobic respiration.

5. Empirical Hypothesis Examples

An empirical hypothesis comes into existence when a statement is tested through experiments. It is not merely an idea or notion; it is a statement that undergoes trial and error, during which various extraneous variables can affect the result. The trials provide a set of results that can be tested over time. The following are examples of empirical hypotheses.

  • A hungry cat will reach the end of a maze more quickly when food is placed at the endpoint than a cat that is not hungry.
  • People who consume vitamin C have more glowing skin than people who consume vitamin E.
  • Hair growth is faster after consumption of vitamin E than of vitamin K.
  • Plants will grow faster with fertilizer X than with fertilizer Y.

6. Statistical Hypothesis Examples

Statements that can be evaluated using statistical tools are considered statistical hypotheses. The researcher uses statistical data about an area or group when analysing a statistical hypothesis. For example, if you study the IQ of women in nation X, it would be practically impossible to measure the IQ of every woman there. Here, statistical methods come to the rescue: the researcher can choose a sample population, i.e., women from different states or provinces of nation X, and conduct statistical tests on this sample to estimate the average IQ of women in nation X. The following are examples of statistical hypotheses.

  • 30 percent of the women of nation X are employed.
  • 50 percent of the people living in the savannah are above the age of 70.
  • 45 percent of the poor people in the United States are uneducated.
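The first example above can be checked against sample data with a one-proportion z-test. This hand-rolled sketch uses only the standard library, and the survey numbers are hypothetical:

```python
import math

def one_proportion_ztest(successes: int, n: int, p0: float):
    """Two-sided z-test of H0: the true proportion equals p0 (large-sample)."""
    p_hat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)  # standard error under H0
    z = (p_hat - p0) / se
    # two-sided p-value from the standard normal CDF, computed via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical survey: 370 of 1,000 sampled women are employed; H0: p = 0.30
z, p = one_proportion_ztest(370, 1000, 0.30)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p casts doubt on the 30% claim
```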

Significance of Hypothesis

A hypothesis is crucial in experimental research, as it predicts a particular outcome of the experiment and guides the researcher to focus on the relevant area of investigation. However, a hypothesis is not required by all researchers. Research that aims only at finding facts, such as historical research, does not need one: the researcher looks for evidence about human life, the history of a particular area, or the occurrence of an event, and so has no strong basis on which to make an assumption in advance. As stated by Hillway (1964),

“When fact-finding alone is the aim of the study, a hypothesis is not required.”

The hypothesis may not be an important part of descriptive or historical studies, but it is crucial for experimental research. The following points show the importance of formulating a hypothesis before conducting an experiment.

  • A hypothesis provides a tentative statement about the outcome of the experiment that can be validated and tested. It helps the researcher focus directly on the problem under investigation by collecting data relevant to the variables mentioned in the hypothesis.
  • A hypothesis gives direction to experimental research. It helps the researcher decide what is relevant to the study and what is not, saving time that would otherwise be wasted reviewing irrelevant literature and collecting irrelevant data.
  • A hypothesis helps the researcher choose the appropriate sample, statistical tests, variables to study, and research methodology. It also keeps the study from becoming over-generalised, as it focuses on the exact, limited problem under investigation.
  • A hypothesis acts as a framework for deducing the outcomes of the experiment. The researcher can test different hypotheses to understand the interactions among the variables involved, and on the basis of the results can formulate a meaningful final report.


How to never be wrong

  • Theoretical Review
  • Published: 24 May 2018
  • Volume 26, pages 13–28 (2019)

  • Samuel J. Gershman


Human beliefs have remarkable robustness in the face of disconfirmation. This robustness is often explained as the product of heuristics or motivated reasoning. However, robustness can also arise from purely rational principles when the reasoner has recourse to ad hoc auxiliary hypotheses. Auxiliary hypotheses primarily function as the linking assumptions connecting different beliefs to one another and to observational data, but they can also function as a “protective belt” that explains away disconfirmation by absorbing some of the blame. The present article traces the role of auxiliary hypotheses from philosophy of science to Bayesian models of cognition and a host of behavioral phenomena, demonstrating their wide-ranging implications.


“No theory ever agrees with all the facts in its domain, yet it is not always the theory that is to blame. Facts are constituted by older ideologies, and a clash between facts and theories may be proof of progress.” Feyerabend ( 1975 )

Introduction

Since the discovery of Uranus in 1781, astronomers had been troubled by certain irregularities in its orbit, which appeared to contradict the prevailing Newtonian theory of gravitation. Then, in 1845, Le Verrier and Adams independently completed calculations showing that these irregularities could be entirely explained by the gravity of a previously unobserved planetary body. This hypothesis was confirmed a year later through telescopic observation, and thus an eighth planet (Neptune) was added to the solar system. Le Verrier and Adams succeeded on two fronts: they discovered a new planet, and they rescued the Newtonian theory from disconfirmation.

Neptune is a classic example of what philosophers of science call an ad hoc auxiliary hypothesis (Popper, 1959 ; Hempel, 1966 ). All scientific theories make use of auxiliary assumptions that allow them to interpret experimental data. For example, an astronomer makes use of optical assumptions to interpret telescope data, but one would not say that these assumptions are a core part of an astronomical theory; they can be replaced by other assumptions as the need arises (e.g., when using a different measurement device), without threatening the integrity of the theory. An auxiliary assumption becomes an ad hoc hypothesis when it entails unconfirmed claims that are specifically designed to accommodate disconfirmatory evidence.

Ad hoc auxiliary hypotheses have long worried philosophers of science, because they suggest a slippery slope toward unfalsifiability (Harding, 1976 ). If any theory can be rescued in the face of disconfirmation by changing auxiliary assumptions, how can we tell good theories from bad theories? While Le Verrier and Adams were celebrated for their discovery, many other scientists were less fortunate. For example, in the late 19th century, Michelson and Morley reported experiments apparently contradicting the prevailing theory that electromagnetic radiation is propagated through a space-pervading medium (ether). FitzGerald and Lorentz attempted to rescue this theory by hypothesizing electrical effects of ether that were of exactly the right magnitude to produce the Michelson and Morley results. Ultimately, the ether theory was abandoned, and Popper ( 1959 ) derided the FitzGerald–Lorentz explanation as “unsatisfactory” because it “merely served to restore agreement between theory and experiment.”

Ironically, Le Verrier himself was misled by an ad hoc auxiliary hypothesis. The same methodology that had served him so well in the discovery of Neptune failed catastrophically in his “discovery” of Vulcan, a hypothetical planet postulated to explain excess precession in Mercury’s orbit. Le Verrier died convinced that Vulcan existed, and many astronomers subsequently reported sightings of the planet, but the hypothesis was eventually discredited by Einstein’s theory of general relativity, which accounted precisely for the excess precession without recourse to an additional planet.

The basic problem posed by these examples is how to assign credit or blame to central hypotheses vs. auxiliary hypotheses. An influential view, known as the Duhem–Quine thesis (reviewed in the next section), asserts that this credit assignment problem is insoluble—central and auxiliary hypotheses must face observational data “as a corporate body” (Quine, 1951 ). This thesis implies that theories will be resistant to disconfirmation as long as they have recourse to ad hoc auxiliary hypotheses.

Psychologists recognize such resistance as a ubiquitous cognitive phenomenon, commonly viewed as one among many flaws in human reasoning (Gilovich, 1991 ). However, as the Neptune example attests, such hypotheses can also be instruments for discovery. The purpose of this paper is to discuss how a Bayesian framework for induction deals with ad hoc auxiliary hypotheses (Dorling, 1979 ; Earman, 1992 ; Howson and Urbach, 2006 ; Strevens, 2001 ), and then to leverage this framework to understand a range of phenomena in human cognition. According to the Bayesian framework, resistance to disconfirmation can arise from rational belief-updating mechanisms, provided that an individual’s “intuitive theory” satisfies certain properties: a strong prior belief in the central hypothesis, coupled with an inductive bias to posit auxiliary hypotheses that place high probability on observed anomalies. The question then becomes whether human intuitive theories satisfy these properties, and several lines of evidence suggest the answer is yes. Footnote 1 In this light, humans are surprisingly rational. Human beliefs are guided by strong inductive biases about the world. These biases enable the development of robust intuitive theories, but can sometimes lead to preposterous beliefs.

Underdetermination of theories: the Duhem–Quine thesis

Theories (both scientific and intuitive) are webs of interconnected hypotheses about the world. Thus, one often cannot confirm or disconfirm one hypothesis without affecting the validity of the other hypotheses. How, then, can we establish the validity of an individual hypothesis? Duhem ( 1954 ) brought this issue to the foreground in his famous treatment of theoretical physics:

The physicist can never subject an isolated hypothesis to experimental test, but only a whole group of hypotheses; when the experiment is in disagreement with his predictions, what he learns is that at least one of the hypotheses constituting this group is unacceptable and ought to be modified; but the experiment does not designate which one should be changed. (p. 187)

While Duhem restricted his attention to theoretical physics, Quine ( 1951 ) took the same point to its logical extreme, asserting that all beliefs about the world are underdetermined by observational data:

The totality of our so-called knowledge or beliefs, from the most casual matters of geography and history to the profoundest laws of atomic physics or even of pure mathematics and logic, is a man-made fabric, which impinges on experience only along the edges. Or, to change the figure, total science is like a field of force whose boundary conditions are experience. A conflict with experience at the periphery occasions readjustments in the interior of the field. But the total field is so underdetermined by its boundary conditions, experience, that there is much latitude of choice as to what statements to reevaluate in the light of any single contrary experience. No particular experiences are linked with any particular statements in the interior of the field, except indirectly through considerations of equilibrium affecting the field as a whole. (p. 42-43)

In other words, one cannot unequivocally identify particular beliefs to revise in light of surprising observations. Quine’s conclusion was stark: “The unit of empirical significance is the whole of science” (p. 42).

Some philosophers have taken underdetermination to invite a radical critique of theory-testing. If evidence cannot adjudicate between theories, then non-empirical forces, emanating from the social and cultural environment of scientists, must drive theory change. For example, the “research programs” of Lakatos ( 1976 ) and the “paradigms” of Kuhn ( 1962 ) were conceived as explanations of why scientists often stick to a theory despite disconfirming evidence, sometimes for centuries. Lakatos posited that scientific theories contain a hard core of central theses that are immunized from refutation by a “protective belt” of auxiliary hypotheses. On this view, science does not progress by falsification of individual theories, but rather by developing a sequence of theories that progressively add novel predictions, some of which are corroborated by empirical data.

While the radical consequences of underdetermination have been disputed (e.g., Grünbaum, 1962 ; Laudan, 1990 ), the problem of credit assignment remains a fundamental challenge for the scientific enterprise. I now turn to a Bayesian approach to induction that attempts to answer this challenge.

The Bayesian answer to underdetermination

Probability theory offers a coherent approach to credit assignment (Howson and Urbach, 2006 ). Instead of assigning all credit to either central or auxiliary hypotheses, probability theory dictates that credit should be apportioned in a graded manner according to the “responsibility” each hypothesis takes for the data. More formally, let h denote the central hypothesis, a denote the auxiliary hypothesis, and d denote the data. After observing d , the prior probability of the conjunct ha , P ( h a ), is updated to the posterior distribution P ( h a | d ) according to Bayes’ rule:

P(ha | d) = P(d | ha) P(ha) / [P(d | ha) P(ha) + P(d | ¬(ha)) P(¬(ha))]

where P(d | ha) is the likelihood of the data under ha, and ¬(ha) denotes the negation of ha.

The sum rule of probability allows us to ascertain the updated belief about the central hypothesis, marginalizing over all possible auxiliaries:

P(h | d) = P(ha | d) + P(h¬a | d)

Likewise, the marginal posterior over the auxiliary is given by:

P(a | d) = P(ha | d) + P(¬h a | d)

This formulation is the crux of the Bayesian answer to underdetermination (Dorling, 1979 ; Earman, 1992 ; Howson & Urbach, 2006 ; Strevens, 2001 ). A Bayesian scientist does not wholly credit either the central or auxiliary hypotheses, but rather distributes the credit according to the marginal posterior probabilities.
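
To make this apportionment concrete, here is a minimal numeric sketch in Python. The priors and likelihoods are hypothetical, chosen only to mirror the structure of the Neptune episode: a strongly believed central hypothesis and a disconfirming datum that is impossible under the conjunct ha.

```python
# Bayesian credit assignment between a central hypothesis h and an
# auxiliary hypothesis a. The four conjuncts partition the space.

def posterior(priors, likelihoods):
    """Bayes' rule over a discrete partition: P(c | d) is proportional
    to P(d | c) P(c)."""
    joint = {c: priors[c] * likelihoods[c] for c in priors}
    z = sum(joint.values())
    return {c: p / z for c, p in joint.items()}

# Hypothetical priors: P(h) = 0.9 and P(a | h) = 0.7.
priors = {('h', 'a'): 0.63, ('h', '~a'): 0.27,
          ('~h', 'a'): 0.05, ('~h', '~a'): 0.05}

# The disconfirming datum is impossible under ha, certain otherwise.
likelihoods = {('h', 'a'): 0.0, ('h', '~a'): 1.0,
               ('~h', 'a'): 1.0, ('~h', '~a'): 1.0}

post = posterior(priors, likelihoods)

# Marginal posteriors: credit is apportioned, not assigned wholesale.
p_h = post[('h', 'a')] + post[('h', '~a')]
p_a = post[('h', 'a')] + post[('~h', 'a')]
```

Disconfirmation lowers P(h) from 0.9 to about 0.73, while P(a) collapses from 0.68 to about 0.14: the auxiliary absorbs most of the blame, exactly the graded apportionment described above.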

This analysis does not make a principled distinction between central and auxiliary hypotheses: they act conjunctively, and are acted upon in the same way by the probability calculus. What ultimately matters for distinguishing them, as illustrated below, is the relative balance of evidence for the different hypotheses. Central hypotheses will typically be more entrenched due to a stronger evidential foundation, and thus auxiliary hypotheses will tend to be the elements of Quine’s “total field” that readjust in the face of disconfirmation.

I will not address here the philosophical controversies that have surrounded the Bayesian analysis of auxiliary hypotheses (Fitelson and Waterman, 2005 ; Mayo, 1997 ). My goal is not to establish the normative adequacy of the Bayesian analysis, but rather to explore its implications for cognition—in particular, how it helps us understand resistance to belief updating.

Following Strevens ( 2001 ), I illustrate the dynamics of belief by assuming that the data d has its impact on the posterior probability of the central hypothesis h solely through its falsification of the conjunct ha Footnote 2 :

P(d | ha) = 0,  P(d | ¬(ha)) = 1

In other words, the likelihood is 0 for ha and 1 for all other conjuncts. Under this assumption, Strevens obtains the following expression:

P(h | d) = P(h)(1 − P(a | h)) / (1 − P(h) P(a | h))

This expression has several intuitive properties, illustrated in Fig.  1 . As one would expect, the posterior probability of h always decreases following disconfirmatory data d . The decrease in the posterior probability is inversely proportional to P ( h ) and directly proportional to P ( a | h ). Footnote 3 Thus, a central hypothesis with high prior probability relative to the auxiliary hypothesis [i.e., high P ( h )/ P ( a | h )] will be relatively robust to disconfirmation, pushing blame onto the auxiliary. But if the auxiliary has sufficiently high prior probability, the central hypothesis will be forced to absorb the blame. It is important to see that the robustness to disconfirmation conferred by a strong prior is not a bias due to motivated reasoning (Kunda, 1990 )—it is a direct consequence of rational inference. This will be a key point reiterated throughout the paper. Footnote 4
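
Strevens' expression is easy to explore numerically. The short sketch below (with hypothetical parameter values) reproduces the qualitative behavior described here: the ratio of posterior to prior probability of h stays near 1 when P(h) is high relative to P(a|h), and falls when the auxiliary is itself strongly believed.

```python
def posterior_ratio(p_h, p_a_given_h):
    """P(h | d) / P(h) under Strevens' falsification assumption:
    P(h | d) = P(h)(1 - P(a|h)) / (1 - P(h) P(a|h))."""
    return (1 - p_a_given_h) / (1 - p_h * p_a_given_h)

# A strong prior on the central hypothesis blunts disconfirmation:
robust = posterior_ratio(0.9, 0.5)   # high P(h): ratio close to 1
fragile = posterior_ratio(0.3, 0.5)  # low P(h): larger drop

# A high-prior auxiliary forces the central hypothesis to absorb blame:
blamed = posterior_ratio(0.9, 0.95)
```

With a strong prior on h (0.9) the posterior retains about 91% of the prior probability; with a weak prior (0.3) it retains only about 59%; and when the auxiliary is itself strongly believed (P(a|h) = 0.95), the central hypothesis keeps only about 34%.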

Figure 1: Simulations. Ratio of posterior to prior probability of the central hypothesis h as a function of the probability of the auxiliary hypothesis a given h, plotted for three different priors for the central hypothesis. Adapted from Strevens ( 2001 )

One might wonder how this analysis determines whether an auxiliary hypothesis is ad hoc or not. The answer is that it doesn’t: the only distinguishing features of hypotheses are their prior probabilities and their likelihoods. Thus, on this account “ad hoc” is simply a descriptive label that we use to individuate hypotheses that have low prior probability and high likelihoods. By the same token, a “good” versus “bad” ad hoc auxiliary hypothesis is determined entirely by the prior and likelihood.

Robustness of intuitive theories

One strong assumption underlying this analysis is worth highlighting, namely that the likelihood of h¬a, marginalizing over all alternative auxiliaries (a_k), is equal to 1:

P(d | h¬a) = Σ_k P(a_k | h, ¬a) P(d | h a_k) = 1

I will refer to this as the consistency assumption , because it states that only auxiliary hypotheses that are highly consistent with the data will have non-zero probability. Mathematically, this means that P ( a k ) > 0 if and only if P ( d | h a k ) = 1. Ad hoc auxiliary hypotheses, by design, have the property that P ( d | h a k ) ≈ 1. But why should these hypotheses be preferred over others? One way to justify this assumption is to stipulate that there is uncertainty about the parameters of the distribution over auxiliary hypotheses. The prior over these parameters can express a preference for redistributing probability mass (i.e., assigning credit) in particular ways once data are observed.

Concretely, let 𝜃 denote the parameter vector of the multinomial distribution over auxiliaries. Because we have uncertainty about 𝜃 in addition to h and a , we need to marginalize over 𝜃 to obtain the posterior P ( h | d ):

P(h | d) = ∫ P(h | d, 𝜃) P(𝜃 | d) d𝜃

As detailed in the  Appendix , choosing P ( 𝜃 ) to be a sparse Dirichlet distribution has the effect that P ( d | h ¬ a ) ≈ 1. A sparse Dirichlet distribution places most of its probability mass on multinomial distributions with low entropy (i.e., those that favor a small set of auxiliary hypotheses). After observing d , the marginal distribution P ( a k | d ) will place most of its probability mass on auxiliary hypotheses that are consistent with the data. In other words, the assumption of sparsity licenses us to discount all the auxiliary hypotheses that are inconsistent with the data. The remaining auxiliaries may appear as though they are ad hoc, but in fact they are the only ones that survive the cull of rational inference.
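
The role of the sparse Dirichlet prior can be illustrated with a short Monte-Carlo sketch (the parameters below are illustrative, not taken from the Appendix): draws from a symmetric Dirichlet with a small concentration parameter put most of their mass on a few auxiliary hypotheses (low entropy), while a large concentration spreads mass evenly.

```python
import math
import random

# A Dirichlet draw is a normalized vector of Gamma variates (stdlib only).
def dirichlet_sample(alpha, k, rng):
    gammas = [rng.gammavariate(alpha, 1.0) for _ in range(k)]
    total = sum(gammas)
    return [g / total for g in gammas]

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

def mean_entropy(alpha, k=10, n=500, seed=0):
    """Average entropy of n multinomials drawn from a symmetric
    Dirichlet with concentration alpha over k auxiliaries."""
    rng = random.Random(seed)
    samples = (dirichlet_sample(alpha, k, rng) for _ in range(n))
    return sum(entropy(p) for p in samples) / n

sparse = mean_entropy(0.1)   # favors low-entropy multinomials
dense = mean_entropy(10.0)   # favors near-uniform multinomials
```

With these settings the sparse prior yields a much lower average entropy than the dense one (the maximum possible entropy for k = 10 is log 10, about 2.3), so once data arrive, probability mass is redistributed onto the few auxiliaries consistent with them.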

In addition to sparsity, the consistency assumption requires deterministic hypotheses: P ( d | h a k ) must be close to 1 if a k is to be considered plausible (see Appendix). If hypotheses are allowed to probabilistically predict the data, then P ( d | h ¬ a ) < 1. In summary, sparsity and determinism jointly facilitate the robustness of theories. In this section, I will argue that these properties characterize human intuitive theories.

Sparsity

The sparsity assumption—that only a few auxiliary hypotheses have high probability—has appeared throughout cognitive science in various guises. Klayman and Ha ( 1987 ) posited a minority phenomenon assumption, according to which the properties that are characteristic of a hypothesis tend to be rare. For example, AIDS is rare in the population but highly correlated with HIV; hence observing that someone has AIDS is highly informative about whether they have HIV. Klayman and Ha ( 1987 ) invoked this assumption to justify the “positive test strategy” prevalent in human hypothesis testing. If people seek confirmation for their hypotheses, then failure to observe the confirmatory evidence will provide strong evidence against the hypothesis under the minority phenomenon assumption. Oaksford and Chater ( 1994 ) used the same idea (what they called the rarity assumption ) to explain the use of the positive test strategy in the Wason card selection task. Violations of the sparsity assumption, or contextual information that changes perceived sparsity, causes people to shift away from the positive test strategy (Hendrickson, Navarro, & Perfors, 2016 ; McKenzie, Ferreira, Mikkelsen, McDermott, & Skrable, 2001 ). Experiments on hypothesis evaluation tell a similar story: the evidential impact of observations is greater when they are rare (McKenzie & Mikkelsen, 2000 ; McKenzie & Mikkelsen, 2007 ), consistent with the assumption that hypotheses are sparse.

Beyond hypothesis testing and evaluation, evidence suggests that people tend to generate sparse hypotheses when presented with data. For example, Perfors and Navarro ( 2009 ) asked participants to generate hypothetical number concepts applicable to the range [1,1000], and found that most of these hypotheses were sparse. For example, a common hypothesis was prime numbers, with a sparsity of 0.168 (i.e., 16.8% of numbers in [1,1000] are primes). Overall, 83% of the generated hypotheses had a sparsity level of 0.2 or less. Footnote 5

Sparsity has also figured prominently in theories of perception. Olshausen and Field ( 1996 ) accounted for the tuning properties of receptive fields in primary visual cortex by assuming that they represent a sparse set of image components. Similar sparse coding ideas have been applied to auditory (Hromádka, DeWeese, & Zador, 2008 ) and olfactory (Poo & Isaacson, 2009 ) cortical representations. Psychologists have likewise posited that humans parse complex objects into a small set of latent components with distinctive visual features (Austerweil & Griffiths, 2013 ; Biederman, 1987 ).

Is sparsity a reasonable assumption? Navarro and Perfors ( 2011 ) attempted to answer this question by demonstrating that (under some fairly generic assumptions) sparsity is a consequence of family resemblance : hypotheses tend to generate data that are more similar to one another than to data generated by other hypotheses. For example, members of the same natural category tend to have more overlapping features relative to members of other natural categories (Rosch, 1978 ). Navarro and Perfors ( 2011 ) further showed that natural categories are empirically sparse. Thus, the sparsity assumption may be inevitable if hypotheses describe natural categories.

Determinism

The determinism assumption—that hypotheses tend to generate data near-deterministically—is well supported as a property of intuitive theories. Some of the most compelling evidence comes from studies of children showing that children will posit a latent cause to explain surprising events, rather than attribute the surprising event to inherent stochasticity (Schulz & Sommerville, 2006 ; Muentener & Schulz, 2014 ; Wu et al., 2015 ; Saxe et al., 2005 ; Buchanan & Sobel, 2011 ). For example, Schulz and Sommerville ( 2006 ) presented 4-year-olds with a stochastic generative cause and found that the children inferred an inhibitory cause to “explain away” the stochasticity. Children also expect latent agents to be the cause of surprising motion events, even in the absence of direct evidence for an agent (Saxe et al., 2005 ). Like children, adults also appear to prefer deterministic hypotheses (Mayrhofer & Waldmann, 2015 ; Frosch & Johnson-Laird, 2011 ). Footnote 6 The prevalent use of the positive test strategy in information selection has also been justified using the determinism assumption (Austerweil & Griffiths, 2011 ).

Lu, Yuille, Liljeholm, Cheng, and Holyoak ( 2008 ) have proposed a “generic prior” for causal strength that combines the sparsity and determinism principles. A priori , causes are expected to be few in number and potent in their generative or preventative effects. Lu et al., ( 2008 ) showed quantitatively that this prior, when employed in a Bayesian framework for causal induction, provides a good description of human causal inferences. Footnote 7 Buchanan, Tenenbaum, and Sobel ( 2010 ) developed an alternative deterministic causal model based on an edge replacement process, which creates a branching structure of stochastic latent variables. This model can explain violations of conditional independence in human judgments in terms of the correlations induced by the latent variables.

In summary, sparsity and determinism appear to be central properties of intuitive theories. These properties offer support for the particular Bayesian analysis of auxiliary hypotheses elaborated above, according to which robustness of theories derives from the ability to explain away disconfirmatory data by invoking auxiliary hypotheses.

Implications

Having established the plausibility of the Bayesian analysis, we now explore some of its implications for human cognition. The central theme running through all of these examples is that the evidential impact of observations is contingent on the auxiliary hypotheses one holds; changing one’s beliefs about auxiliary hypotheses will change the interpretation of observations. Thus, observations that appear to contradict a central hypothesis can be “explained away” by changing auxiliary hypotheses, and this change is licensed by the Bayesian analysis under the specific circumstances detailed above. If, as I have argued, intuitive theories have the right sort of properties to support this “protective belt” of auxiliary hypotheses (cf. Lakatos, 1976 ), then we should expect robustness to disconfirmation across many domains.

Before proceeding, it is important to note that many of the phenomena surveyed below can also be explained by other theoretical frameworks, such as motivated cognition (Kunda, 1990 ). The purpose of this section is not to develop a watertight case for the Bayesian framework—which would require more specific model specifications for different domains and new experiments to test rival predictions—but rather to show that evidence for robustness to disconfirmation does not by itself indicate irrationality; it is possible to conceive of a perfectly rational agent who exhibits such behavior. Whether humans really are rational in this way is an unresolved empirical question. Footnote 8

The theory-ladenness of observation

“It is quite wrong to try founding a theory on observable magnitudes alone. In reality the very opposite happens. It is the theory which decides what we can observe.” (Albert Einstein)

Drawing a comparison between the history of science and perceptual psychology, Kuhn ( 1962 ) argued that observation reports are not theory-neutral: “What a man sees depends both upon what he looks at and also upon what his previous visual-conceptual experience has taught him to see” (p. 113). For example, subjects who put on goggles with inverting lenses see the world upside-down, but after a period of profound disorientation lasting several days, their perception adapts and they see the world right-side-up (Stratton, 1897 ). Thus, the very same retinal image produces starkly different percepts depending on the preceding perceptual history.

More important for Kuhn’s argument are examples where percepts, or at least their semantic interpretations, are influenced by the observer’s conceptual framework:

Looking at a contour map, the student sees lines on paper, the cartographer a picture of a terrain. Looking at a bubble-chamber photograph, the student sees confused and broken lines, the physicist a record of familiar subnuclear events. Only after a number of such transformations of vision does the student become an inhabitant of the scientist’s world, seeing what the scientist sees and responding as the scientist does. The world that the student then enters is not, however, fixed once and for all by the nature of the environment, on the one hand, and of science, on the other. Rather, it is determined jointly by the environment and the particular normal-scientific tradition that the student has been trained to pursue. (Kuhn, 1962 , pp. 111–112)

This is essentially a restatement of the view, going back to Helmholtz ( 1867 ), that perception is a form of “unconscious inference” or “problem-solving” (Gregory, 1970 ; Rock, 1983 ) and formalized by modern Bayesian theories of perception (Knill and Richards, 1996 ). Footnote 9

There is one particular form of theory-ladenness that will concern us here, where changes in auxiliary hypotheses alter the interpretation of observations. Disconfirmation can be transformed into confirmation (e.g., the example of Neptune), or vice versa. When Galileo first reported his observations of mountains on the moon, the critical response focused not on the observations per se but on the auxiliary assumptions mediating their validity. Since the telescope was an unfamiliar measurement device, the optical theory underlying its operation was not taken for granted. In fact, it was non-trivial even to verify Galileo’s observations, because many of the other telescopes available in 1610 were of insufficient quality to resolve the same lunar details observed by Galileo. Thus, it was possible at that time to dispute the evidential impact of Galileo’s observations for astronomical theories (see Bovens and Hartmann ( 2002 ) for a detailed analysis of how beliefs about the unreliability of measurement instruments affects reasoning about auxiliary hypotheses).

Although Galileo’s observations were ultimately vindicated, there are other historical examples in which observations were ultimately discredited. For example, Rutherford and Pettersson conducted similar experiments in the 1920s on the emission of charged particles under radioactive bombardment. Pettersson’s assistants observed flashes on a scintillation screen (evidence for emission) whereas Rutherford’s assistants did not. The controversy was subsequently resolved when Rutherford’s colleague, James Chadwick, demonstrated that Pettersson’s assistants were unreliable: they reported indistinguishable rates of flashes even under experimental conditions where no particles could have been emitted. The strategy of debunking claims by undermining auxiliary hypotheses has been used effectively throughout scientific history, from Benjamin Franklin’s challenge of Mesmer’s “animal magnetism” to the revelation that observations of neutrinos exceeding the speed of light were due to faulty detectors. Footnote 10

It is tempting to see a similar strategy at work in contemporary political and scientific debate. In response to negative news coverage, the Trump administration promulgated the idea that the mainstream media is publishing “fake news”—i.e., reports that are inaccurate, unreliable, or biased. This strategy is powerful because it does not focus on the veracity of any one report, but instead attempts to undermine faith in the entire “measurement device.” A similar strategy was used for many years by creationists to undermine faith in evolutionary biology, by the tobacco industry to undermine faith in scientific studies of smoking’s health effects, and by the fossil fuel industry to undermine faith in climate science. By “teaching the controversy,” these groups attempt to dismantle the auxiliary hypotheses on which the validity of science relies. For example, the release of stolen e-mails from the Climatic Research Unit at the University of East Anglia suggested an alternative auxiliary—selective reporting or manipulation of data—that could explain away evidence for human-induced climate change. Indeed, a subsequent survey of Americans showed that over half agreed with the statements “Scientists changed their results to make global warming appear worse than it is” and “Scientists conspired to suppress global warming research they disagreed with” (Leiserowitz, Maibach, Roser-Renouf, Smith, & Dawson, 2013 ).

A well-studied form of theory-ladenness is the phenomenon of belief polarization: individuals presented with the same data will sometimes update their beliefs in opposite directions. In a classic experiment, Lord, Ross, and Lepper ( 1979 ) asked supporters and opponents of the death penalty to read about two fictional studies—one supporting the effectiveness of the death penalty as a crime deterrent, and one supporting its ineffectiveness. Subjects who supported the death penalty subsequently strengthened their belief in the effectiveness of the death penalty after reading the two studies, whereas subjects who opposed the death penalty subsequently strengthened their belief in its ineffectiveness. A large body of empirical work on belief polarization was interpreted by many social psychologists as evidence of irrational belief updating (e.g., Nisbett and Ross, 1980 ; Kunda, 1990 ). However, another possibility is that belief polarization might arise from different auxiliary hypotheses about the data-generating process (Jern et al., 2014 ; Cook & Lewandowsky, 2016 ; Koehler, 1993 ; Jaynes, 2003 ). For example, Jern et al. ( 2014 ) showed how the findings of Lord et al. ( 1979 ) could be accounted for within a rational Bayesian framework. If participants assume the existence of research bias (distortion or selective reporting of findings to support a preconceived conclusion), then reading a study about the ineffectiveness of the death penalty may strengthen their belief in research bias, correspondingly increasing their belief in the effectiveness of the death penalty. Similarly, Cook and Lewandowsky ( 2016 ) demonstrated that beliefs in bias of scientific reporting can lead to discounting of climate change evidence.
One lesson to draw from these examples is that effective persuasion requires more than simply conveying information confirming or disconfirming central hypotheses; it requires alteration of the auxiliary hypotheses that refract information, rendering perception theory-laden.
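
The research-bias account can be sketched as a small Bayesian model. The numbers below are hypothetical; the key structural assumption, in the spirit of Jern et al. (2014), is that the agent's priors over the central hypothesis (the death penalty is effective) and the auxiliary hypothesis (research on the topic is biased) are coupled, so a disconfirming study mainly confirms the bias auxiliary.

```python
# h: "the death penalty deters crime"; b: "research on the topic is biased".
# d: a study reporting that the death penalty is ineffective.

# Coupled priors: a proponent who believes h also tends to believe that
# studies contradicting h reflect bias.
p_h = 0.9
p_b_given_h = 0.9
p_b_given_not_h = 0.2

prior = {('h', 'b'): p_h * p_b_given_h,
         ('h', '~b'): p_h * (1 - p_b_given_h),
         ('~h', 'b'): (1 - p_h) * p_b_given_not_h,
         ('~h', '~b'): (1 - p_h) * (1 - p_b_given_not_h)}

# Likelihood of d: a biased literature reports "ineffective" regardless
# of the truth; an unbiased study mostly tracks the truth.
lik = {('h', 'b'): 0.95, ('h', '~b'): 0.2,
       ('~h', 'b'): 0.95, ('~h', '~b'): 0.8}

joint = {c: prior[c] * lik[c] for c in prior}
z = sum(joint.values())
post = {c: p / z for c, p in joint.items()}

p_h_post = post[('h', 'b')] + post[('h', '~b')]
p_b_post = post[('h', 'b')] + post[('~h', 'b')]
```

With these numbers the study reporting ineffectiveness raises P(h) from 0.90 to about 0.905 and P(bias) from 0.83 to about 0.91: the disconfirming report is absorbed almost entirely by the auxiliary, and the agent's belief in effectiveness is (slightly) strengthened rather than weakened.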

Optimism

Many individuals exhibit a systematic “optimism bias” (Sharot, 2011a ), overestimating the likelihood of positive events in the future. Footnote 11 This bias affects beliefs about many real-world domains, such as the probability of getting divorced or being in a car accident. One of the puzzles of optimism is how it can be maintained; even if we start with initial optimism (cf. Stankevicius et al., 2014 ), why doesn’t reality force our beliefs to eventually calibrate themselves?

A clue to this puzzle comes from evidence that people tend to update their beliefs more in response to positive feedback compared to negative feedback (Eil & Rao, 2011 ; Sharot & Garrett, 2016 ). Eil and Rao ( 2011 ) dubbed this the “good news-bad news effect.” For example, Eil and Rao asked subjects to judge the rank of their IQ and physical attractiveness; subjects then received feedback (a pairwise comparison with a randomly selected subject in the same experiment). While subjects conformed to Bayesian updating when they received positive feedback (i.e., when their rank was better than the comparand), they systematically discounted the negative feedback. Similar results have been found using a variety of feedback types (Sharot et al., 2011b ; Korn et al., 2012 ; Lefebvre et al., 2017 ).

One reason people may discount negative feedback is that they wish to blunt its “sting” (Eil & Rao, 2011 ; Köszegi, 2006 ). Consistent with this account, Eil and Rao found that subjects who believed that their ranks were near the bottom of the distribution were willing to pay to avoid learning their true rank. An alternative account, drawing from our Bayesian analysis of auxiliary hypotheses, is that people are being fully Bayesian, but their internal model is different from the one presupposed by Eil and Rao. Specifically, let h denote the hypothesis that a person is “high rank,” and let a denote the auxiliary hypothesis that the feedback is “valid” (i.e., from an unbiased source). It is intuitive that subjects might discount negative feedback by positing invalid evidence sources; for example, if a person judges you to be unattractive, you could discount this feedback by positing that this person is exceptionally harsh (judges everyone to be unattractive) or is having a bad day.

Suppose we have two people who have the same prior on validity, P ( a | h ), but different priors on their rank, P ( h ). The Bayesian analysis developed above (see Fig.  1 ) predicts that the person who assigns higher prior probability to being high rank will update less in response to negative feedback. Consistent with this prediction, individuals with higher dispositional optimism were more likely to maintain positive expectations after experiencing losses in a gambling task (Gibson & Sanbonmatsu, 2004 ). The Bayesian analysis also predicts that two people with different priors on validity but the same priors on rank will exhibit different patterns of asymmetric updating, with the weaker prior on validity leading to greater discounting of negative feedback. In support of this prediction, Gilovich and colleagues (Gilovich, 1983 ; Gilovich & Douglas, 1986 ) found that people who observed an outcome that appeared to have arisen from a statistical “fluke” were more likely to discount this outcome when it was negative, presumably since the feedback was perceived to be invalid. The same kind of discounting can lead to overconfidence in financial markets, where investors are learning about their abilities; by taking too much credit for their gains and not enough for their losses, they become overconfident (Gervais & Odean, 2001 ).
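Both predictions can be made concrete with a small numerical sketch. The code below is a minimal illustration with hypothetical numbers, not a model fit from any of the cited studies; for simplicity, the prior probability that feedback is valid is taken to be independent of rank, and invalid feedback is assumed to be uninformative.

```python
# Minimal sketch of asymmetric updating via an auxiliary hypothesis.
# All numbers are hypothetical. h = "I am high rank"; a = "feedback is valid".

def posterior_high_rank(prior_h, p_valid, p_neg=(0.1, 0.9, 0.5)):
    """P(h | negative feedback): negative feedback is unlikely under
    (high rank, valid source), likely under (low rank, valid source),
    and uninformative (0.5) when the source is invalid."""
    p_neg_hi, p_neg_lo, p_neg_inv = p_neg
    # Likelihoods of negative feedback under each rank, marginalizing over a
    like_hi = p_valid * p_neg_hi + (1 - p_valid) * p_neg_inv
    like_lo = p_valid * p_neg_lo + (1 - p_valid) * p_neg_inv
    evidence = prior_h * like_hi + (1 - prior_h) * like_lo
    return prior_h * like_hi / evidence

# Same prior on validity, different priors on rank:
optimist = posterior_high_rank(prior_h=0.9, p_valid=0.7)
neutral = posterior_high_rank(prior_h=0.5, p_valid=0.7)

# The optimist loses a smaller fraction of their belief to the same bad news.
print((0.9 - optimist) / 0.9, (0.5 - neutral) / 0.5)
```

Lowering `p_valid` makes the invalid-source explanation more available and further blunts the impact of negative feedback, matching the second prediction about priors on validity.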

A related phenomenon arises in studies of cheating and lying (see Gino, Norton, and Weber ( 2016 ) for a review). When people obtain a favorable outcome through unscrupulous means, they tend to attribute this success to their personal ability. For example, Chance, Norton, Gino, and Ariely ( 2011 ) administered an IQ test to participants that included an answer key at the bottom so that they could optionally “check their work.” Compared to participants who did not have the answer key, those with the answer key not only scored more highly, but also predicted (incorrectly) that they would score more highly on a subsequent test. One way to interpret this result is that participants had a strong prior belief in their ability, which led them to discard the auxiliary hypothesis that cheating aided their score, thereby inflating estimates of their own ability.

The Bayesian analysis predicts that changing auxiliary assumptions will systematically alter updating asymmetries in response to good and bad news. This prediction was recently tested using a two-armed bandit paradigm in which subjects played the role of prospectors choosing between two mines in search of gold (Dorfman, Bhui, Hughes, & Gershman, 2018 ). Each mine was associated with a fixed probability of yielding gold or rocks. In the “benevolent” condition, the subjects were told that a tycoon would intervene on a proportion of trials, replacing the contents of the mine with gold. Importantly, subjects were not told when the tycoon was intervening; they therefore had to infer whether a reward was the result of the intervention or reflected the true underlying reward probability. Because the tycoon would never replace the gold with rocks (the negative outcome), observing rocks was strictly more informative about the underlying reward probability. Subjects in this case were expected to show a pessimism bias, learning more from negative outcomes than from positive outcomes. In contrast, they were expected to show an optimism bias (learning more from positive outcomes than from negative outcomes) in an “adversarial” condition, where a “bandit” replaced the contents of the mine with rocks on a proportion of trials. Computational modeling of the choice data revealed an overall optimism bias, perhaps reflecting the dispositional factors discussed above, but also showed that subjects altered their bias across conditions in accordance with the Bayesian analysis, learning more from negative outcomes in the benevolent condition compared to the adversarial condition.
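The logic of that design can be sketched as follows. This is a toy reconstruction with made-up parameters, not the task or computational model of Dorfman et al.: a grid posterior over the mine's true reward probability is updated after each outcome, with the intervention folded into the likelihood.

```python
# Toy grid-Bayes sketch of the "prospector" task (hypothetical parameters).
# Benevolent condition: a tycoon intervenes with probability t, forcing gold.
# Adversarial condition: a bandit intervenes with probability t, forcing rocks.

def posterior_mean(outcomes, t=0.3, benevolent=True, grid=101):
    """Posterior mean of theta, the mine's true gold probability."""
    thetas = [i / (grid - 1) for i in range(grid)]
    weights = [1.0] * grid  # uniform prior over theta
    for gold in outcomes:
        for i, th in enumerate(thetas):
            # Probability of observing gold, given theta and the intervention
            p_gold = t + (1 - t) * th if benevolent else (1 - t) * th
            weights[i] *= p_gold if gold else 1 - p_gold
    z = sum(weights)
    return sum(th * w / z for th, w in zip(thetas, weights))

# Benevolent condition: rocks are never forced, so a single rocks observation
# moves the estimate further from the prior mean (0.5) than gold does.
shift_gold = abs(posterior_mean([True]) - 0.5)
shift_rocks = abs(posterior_mean([False]) - 0.5)
print(shift_gold < shift_rocks)
```

Swapping `benevolent=False` reverses the asymmetry: gold becomes the strictly more informative outcome, so an optimism bias (learning more from positive outcomes) is the rational pattern in the adversarial condition.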

Controllability

In settings where people might have some control over their observations, beliefs about rank or personal ability are closely connected to beliefs about controllability (Huys & Dayan, 2009 ). If a utility-maximizing agent believes that the world is controllable, then it is reasonable to assume that positive outcomes are more likely than negative outcomes, and hence negative outcomes are more likely to be explained away by alternative auxiliary hypotheses. For example, if you believe that you are a good test-taker (i.e., you have some control over test outcomes), then you may attribute poor test performance to the test difficulty rather than revising your beliefs about your own proficiency; this attribution is less plausible if you believe that you are a bad test-taker (i.e., you lack control over test outcomes). Thus, controllability is an important auxiliary hypothesis for interpreting feedback, with high perceived controllability leading to optimistic beliefs (Harris, 1996 ; Weinstein, 1980 ). The link between controllability and rank can be accommodated within the Bayesian framework, since we model the conditional distribution of the auxiliary hypothesis (controllability) given the central hypothesis (rank). This link is supported by studies showing that mood induction can bring about changes in beliefs about controllability (Alloy, Abramson, & Viscusi, 1981 ).

This analysis of controllability might provide insight into the psychopathology of asymmetric updating in response to positive and negative feedback. Individuals with depression do not show an optimism bias (so-called “depressive realism”; Moore & Fresco, 2012 ), and Korn, Sharot, Walter, Heekeren, and Dolan ( 2014 ) demonstrated that this may arise from symmetric (unbiased) updating. One possibility is that this occurs because individuals with depression believe that the world is relatively uncontrollable—the key idea in the “learned helplessness” theory of depression (Seligman, 1975 ; Huys & Dayan, 2009 ; Abramson et al., 1978 ), which implies that they cannot take credit for positive outcomes any more than they can discount negative outcomes. Another possibility is that individuals with depression have a lower prior on rank, which would also lead to more symmetric updating compared to non-depressed individuals.

When placed in objectively uncontrollable situations, people will nonetheless perceive that they have control (Langer, 1975 ). According to the Bayesian analysis, this can arise when it is possible to discount unexpected outcomes in terms of an auxiliary hypothesis (e.g., fluke events, intrinsic variability, interventions by alternative causes) instead of reducing belief in control. As pointed out by Harris and Osman ( 2012 ), illusions of control typically arise in situations where cues indicate that controllability is plausible. For example, Langer ( 1975 ) showed that cues suggesting that one’s opponent is incompetent inflate the illusion of control in a competitive setting, possibly by increasing the probability that the poor performance of the opponent is due to incompetence rather than the random nature of the outcomes. Another study showed that giving subjects an action that was in fact disconnected from the sequence of outcomes nonetheless inflated their perception that the sequence was controllable (Ladouceur & Sévigny, 2005 ). More generally, active involvement increases the illusion of control, as measured by the propensity for risk-taking: Davis, Sundahl, and Lesbo ( 2000 ) found that gamblers in real-world casinos placed higher bets on their own dice rolls than on others’ dice rolls (see also Gilovich and Douglas, 1986 ; Fernandez-Duque and Wifall, 2007 ).

The basic lesson from all of these studies is that beliefs about controllability and rank can insulate an individual from the disconfirming effects of negative feedback. This response to negative feedback is rational under the assumption that alternative auxiliary hypotheses (e.g., statistical flukes) can absorb the blame.

The true self

Beliefs about the self provide a particularly powerful example of resistance to disconfirmation. People make a distinction between a “superficial” self and a “true” self, and these selves are associated with distinct patterns of behavior (Strohminger, Knobe, & Newman, 2017 ). In particular, people hold a strong prior belief that the true self is good (the central hypothesis h in our terminology). This proposition is supported by several lines of evidence. First, positive, desirable personality traits are viewed as more essential to the true self than negative, undesirable traits (Haslam, Bastian, & Bissett, 2004 ). Second, people feel that they know someone most deeply when given positive information about them (Christy et al., 2017 ). Third, negative changes in traits are perceived as more disruptive to the true self than positive changes (Molouki and Bartels, 2017 ; De Freitas et al., 2017 ).

The key question for our purposes is what happens when one observes bad behavior: do people revise their belief in the goodness of the actor’s true self? The answer is largely no. Bad behavior is attributed to the superficial self, whereas good behavior is attributed to the true self (Newman, Bloom, & Knobe, 2014 ). This tendency holds even for individuals who generally have a negative attitude toward others, such as misanthropes and pessimists (De Freitas et al., 2016 ). And even if people are told explicitly that an actor’s true self is bad, they are still reluctant to see the actor as truly bad (Newman, De Freitas, & Knobe, 2015 ). Conversely, positive changes in behavior (e.g., becoming an involved father after being a deadbeat) are perceived as indicating “self-discovery” (Bench et al., 2015 ; De Freitas et al., 2017 ).

These findings support the view that belief in the true good self shapes the perception of evidence about other individuals: evidence that disconfirms this belief tends to be discounted. The Bayesian framework suggests that this may occur because people infer alternative auxiliary hypotheses, such as situational factors that sever the link between the true self and observed behavior (e.g., he behaved badly because his mother just died). However, this possibility remains to be studied directly.

Stereotype updating

Stereotypes exert a powerful influence on our thinking about other people, but where do they come from? We are not born with strong beliefs about race, ethnicity, gender, religion, or sexual orientation; these beliefs must be learned from experience. What is remarkable is the degree to which stereotypes, once formed, are stubbornly resistant to updating (see Hewstone ( 1994 ) for a review). As Lippmann ( 1922 ) remarked, “There is nothing so obdurate to education or criticism as the stereotype.”

One possible explanation is that stereotypes are immunized from disconfirmation by flexible auxiliary hypotheses. This explanation fits well with the observation that individuals whose traits are inconsistent with a stereotype are segregated into “subtypes” without diluting the stereotype (Weber and Crocker, 1983 ; Hewstone, 1994 ; Johnston & Hewstone, 1992 ). For example, Weber and Crocker ( 1983 ) found that stereotypes were updated more when inconsistent traits were dispersed across multiple individuals rather than concentrated in a few individuals, consistent with the idea that concentration of inconsistencies licenses the auxiliary hypothesis that the individuals are outliers, and therefore do not reflect upon the group as a whole. An explicit sorting task supported this conclusion: inconsistent individuals tended to be sorted into separate groups (see also Johnston and Hewstone, 1992 ).

These findings have been simulated by a recurrent connectionist model of stereotype judgment (Van Rooy, Van Overwalle, Vanhoomissen, Labiouse, & French, 2003 ). The key mechanism underlying subtyping is the competition between “group” units and “individual” units, such that stereotype-inconsistent information will be captured by individual units, provided the inconsistencies are concentrated in specific individuals. When the inconsistencies are dispersed, the group units take responsibility for them, updating the group stereotype accordingly. Another finding, also supported by connectionist modeling (Queller & Smith, 2002 ), is that individuals with moderate inconsistencies cause more updating than individuals with extreme inconsistencies. The logic is once again that extreme inconsistencies cause the individual to be segregated from the group stereotype.

Extinction learning

Like stereotypes, associative memories—in particular fear memories—are difficult to extinguish once formed. For example, in a typical fear conditioning paradigm, a rat is exposed to repeated tone-shock pairings; after only a few pairings, the rat will reliably freeze in response to the tone, indicating its anticipation of an upcoming shock. It may take dozens of tone-alone presentations to return the animal to its pre-conditioning response to the tone, indicating that extinction is much slower than acquisition. Importantly, the fact that the rat has returned to baseline does not mean that it has unlearned the fear memory. Under appropriate conditions, the rat’s fear memory will return (Bouton, 2004 ). For example, simply waiting a month before testing the rat’s response to the tone is sufficient to reveal the dormant fear, a phenomenon known as spontaneous recovery (Rescorla, 2004 ).

As with stereotype updating, one possibility is that conditioned fear is resistant to inconsistent information presented during extinction because the extinction trials are regarded as outliers or possibly subtypes. Thus, although fear can be temporarily reduced during extinction, it is not erased because the subtyping process effectively immunizes the fear memory from disconfirmation. In support of this view, there are suggestive parallels with stereotype updating. Analogous to the dispersed inconsistency conditions studied by Weber and Crocker ( 1983 ) and Johnston and Hewstone ( 1992 ), performing extinction in multiple contexts reduces the return of fear (Chelonis et al., 1999 ; Gunther et al., 1998 ). Analogous to the moderate versus extreme inconsistency manipulation (Queller & Smith, 2002 ), gradually reducing the frequency of tone-shock pairs during extinction prevents the return of fear (Gershman, Jones, Norman, Monfils, & Niv, 2013 ), possibly by titrating the size of the error signal driving memory updating (see also Gershman et al., 2014 ). More generally, it has been argued that effective memory updating procedures must control the magnitude of inconsistency between observations and the memory-based expectation, in order to prevent new memories from being formed to accommodate the inconsistent information (Gershman, Monfils, Norman, & Niv, 2017 ).

Conspiracy theories

As defined by Sunstein and Vermeule ( 2009 ), a conspiracy theory is “an effort to explain some event or practice by reference to the machinations of powerful people, who attempt to conceal their role (at least until their aims are accomplished)” (p. 205). Conspiracy theories are interesting from the perspective of auxiliary hypotheses because they often require a spiraling proliferation of auxiliaries to stay afloat. Each tenuous hypothesis needs an additional tenuous hypothesis to lend it plausibility, which in turn needs more tenuous hypotheses, until the theory embraces an enormous explanatory scope. For example, people who believe that the Holocaust was a hoax need to explain why the population of European Jews declined by 6 million during World War II; if they claim that the Jews immigrated to Israel and other countries, then they need to explain the discrepancy with immigration statistics, and if they claim that these statistics are false, then they need to explain why they were falsified, and so on.

Because conspiracy theories tend to have an elaborate support structure of auxiliary hypotheses, disconfirming evidence can be effectively explained away, commonly by undermining the validity of the evidence source. As Sunstein and Vermeule ( 2009 ) put it:

Conspiracy theories often attribute extraordinary powers to certain agents—to plan, to control others, to maintain secrets, and so forth. Those who believe that those agents have such powers are especially unlikely to give respectful attention to debunkers, who may, after all, be agents or dupes of those who are responsible for the conspiracy in the first instance… The most direct governmental technique for dispelling false (and also harmful) beliefs—providing credible public information—does not work, in any straightforward way, for conspiracy theories. This extra resistance to correction through simple techniques is what makes conspiracy theories distinctively worrisome. (p. 207)

This description conforms to the Bayesian theory’s prediction that a sparse, deterministic set of ad hoc auxiliary hypotheses can serve to explain away disconfirming data. In particular, conspiracy theorists use a large set of auxiliary hypotheses that perfectly (i.e., deterministically) predict the observed data and only the observed data (sparsity). This “drive for sense-making” (Chater & Loewenstein, 2016 ) is rational if the predictive power of a conspiracy theory outweighs the penalty for theory complexity—the Bayesian “Occam’s razor” (MacKay, 2003 ).
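The trade-off invoked by the Bayesian Occam's razor can be illustrated with a short calculation. All numbers below are hypothetical: a deterministic conspiracy theory assigns probability 1 to exactly the observed data, a mundane theory spreads its predictions over many possible outcomes, and the complexity penalty enters through the prior.

```python
# Toy illustration of the Bayesian Occam's razor (hypothetical numbers).
# The conspiracy theory predicts each observation deterministically; the
# mundane theory spreads probability over 20 possible outcomes per observation.

def posterior_odds(n_observations, prior_conspiracy=0.01, spread=20):
    """Posterior odds of conspiracy vs. mundane theory after n observations."""
    like_conspiracy = 1.0 ** n_observations          # deterministic fit
    like_mundane = (1.0 / spread) ** n_observations  # diffuse predictions
    return (prior_conspiracy * like_conspiracy) / (
        (1 - prior_conspiracy) * like_mundane)

# With one observation the complexity penalty dominates (odds below 1), but
# a few perfectly "explained" observations flip the odds in favor of the
# conspiracy: sense-making can outweigh the penalty for complexity.
print(posterior_odds(1), posterior_odds(3))
```

On this sketch, the rationality of a conspiracy theory hinges on how many observations it predicts perfectly relative to the prior penalty its elaborate auxiliary structure incurs.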

Some evidence suggests that the tendency to endorse conspiracy theories is a personality trait or cognitive style: people who endorse one conspiracy theory tend to also endorse other conspiracy theories (Lewandowsky, Oberauer, & Gignac, 2013 ; Goertzel, 1994 ). One possibility is that this reflects parametric differences in probabilistic assumptions across individuals, such that people with very sparse and deterministic priors will be more likely to find conspiracy theories plausible.

Religious belief

While conspiracy theories are promulgated by relatively small groups of people, religious beliefs are shared by massive groups of people. Yet most of these people have little or no direct evidence for God: few have witnessed a miracle, spoken to God, or wrestled with an angel in their dreams. In fact, considerable evidence, at least on the surface, argues against belief in God, such as the existence of evil and the historical inaccuracy of the Bible.

One of the fundamental problems in the philosophy of religion is to understand the epistemological basis for religious beliefs—are they justified (Swinburne, 2004 ), or are they fictions created by psychological biases and cultural practices (Boyer, 2003 )? Central to this debate is the status of evidence for the existence of God, such as reports of miracles. Following Hume ( 1748 ), a miracle is conventionally defined as “a transgression of a law of nature by a particular volition of the Deity” (p. 173). Hume famously argued that the evidence for miracles will always be outweighed by the evidence against them, since miracles are one-time transgressions of “natural” laws that have been established on the basis of countless observations. It would require unshakeable faith in the testimony of witnesses to believe in miracles, whereas in fact (Hume argues) testimony typically originates among uneducated, ignorant people.

As a number of philosophers (e.g., Earman, 2000 ; Swinburne, 1970 ) have pointed out, Hume’s argument is weakened when one considers miracles through the lens of probability. Even if the reliability of individual witnesses is low, a sufficiently large number of such witnesses should provide strong evidence for a miracle. Likewise, our beliefs about natural laws are based on a finite amount of evidence, possibly from sources of varying reliability, and hence are subject to the same probabilistic considerations. Whether or not the probabilistic analysis supports the existence of God depends on the amount and quality of evidence (both from experiment and hearsay) relative to the prior. Indeed, the same analysis has been used to deny the existence of God (Howson, 2011 ).
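The probabilistic point about testimony can be made quantitative with a toy calculation (all numbers hypothetical): if each witness's report is even modestly more likely under the miracle than under its absence, the per-witness likelihood ratio compounds across independent witnesses and can overwhelm a minuscule prior.

```python
# Toy version of the probabilistic argument about testimony (hypothetical
# numbers): n independent witnesses each report having observed a miracle.

def posterior_miracle(prior, n_witnesses,
                      p_report_if_true=0.6, p_report_if_false=0.3):
    """P(miracle | n independent reports), computed via posterior odds."""
    prior_odds = prior / (1 - prior)
    # Each witness contributes a likelihood ratio of 0.6 / 0.3 = 2 here
    lr = (p_report_if_true / p_report_if_false) ** n_witnesses
    odds = prior_odds * lr
    return odds / (1 + odds)

print(posterior_miracle(1e-6, 10))  # still very improbable
print(posterior_miracle(1e-6, 40))  # testimony has overwhelmed the prior
```

Hume's rebuttal, on this reading, amounts to an auxiliary hypothesis that pushes `p_report_if_false` up toward `p_report_if_true` (credulous or colluding witnesses), driving the per-witness likelihood ratio back toward 1 and neutralizing the accumulation.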

The probabilistic analysis of miracles provides another example of auxiliary hypotheses in action. The evidential impact of alleged miracles depends on auxiliary hypotheses about the reliability of testimony. If one is a religious believer, one can discount the debunking of miracles by questioning the evidence for natural laws. For example, some creationists argue that the fossil record is fake. Conversely, a non-believer can discount the evidence for miracles by questioning the eyewitness testimony, as Hume did. One retort to this view is that symmetry is misleading: the reliability of scientific evidence is much stronger than the historical testimony (e.g., Biblical sources). However, if one has a strong a priori belief in an omnipotent and frequently inscrutable God, then it may appear more plausible that apparent disconfirmations are simply examples of this inscrutability. In other words, if one believes in intelligent design, then scientific evidence that contradicts religious sources may be interpreted as evidence for our ignorance of the true design. Footnote 12

Conceptual change in childhood

Children undergo dramatic restructuring of their knowledge during development, inspiring analogies with conceptual change in science (Carey, 2009 ; Gopnik, 2012 ). According to this “child-as-scientist” analogy, children engage in many of the same epistemic practices as scientists: probabilistically weighing evidence for different theories, balancing simplicity and fit, inferring causal relationships, carrying out experiments. If this analogy holds, then we should expect to see signs of resistance to disconfirmation early in development. In particular, Gopnik and Wellman ( 1992 ) have argued that children form ad hoc auxiliary hypotheses to reason about anomalous data until they can discover more coherent alternative theories.

For example, upon being told that the earth is round, some children preserve their preinstructional belief that the earth is flat by inferring that the earth is disk-shaped (Vosniadou & Brewer, 1992 ). After being shown two blocks of different weights hitting the ground at the same time when dropped from the same height, some middle-school students inferred that they hit the ground at different times but the difference was too small to observe, or that the blocks were in fact (contrary to the teacher’s claims) the same weight (Champagne et al., 1985 ). Children who hold a geometric-center theory of balancing believe that blocks must be balanced in the middle; when faced with the failure of this theory applied to uneven blocks, children declare that the uneven blocks are impossible to balance (Karmiloff-Smith & Inhelder, 1975 ).

Experimental work by Schulz, Goodman, Tenenbaum, and Jenkins ( 2008 ) has illuminated the role played by auxiliary hypotheses in children’s causal learning. In these experiments, children viewed contact interactions between various blocks, resulting in particular outcomes (e.g., a train noise or a siren noise). Children then made inferences about novel blocks based on ambiguous evidence. The data suggest that children infer abstract laws that describe causal relations between classes of blocks (see also Schulz and Sommerville, 2006 ; Saxe et al., 2005 ). Schulz and colleagues argue for a connection between the rapid learning abilities of children (supported by abstract causal theories) and resistance to disconfirmation: the explanatory scope of abstract causal laws confers a strong inductive bias that enables learning from small amounts of data, and this same inductive bias confers robustness in the face of anomalous data by assigning responsibility to auxiliary hypotheses (e.g., hidden causes). A single anomaly will typically be insufficient to disconfirm an abstract causal theory that explains a wide range of data.

The use of auxiliary hypotheses has important implications for education. In their discussion of the educational literature, Chinn and Brewer ( 1993 ) point out that anomalous data are often used in the classroom to spur conceptual change, yet “the use of anomalous data is no panacea. Science students frequently react to anomalous data by discounting the data in some way, thus preserving their preinstructional theories” (p. 2). They provide examples of children employing a variety of discounting strategies, such as ignoring anomalous data, excluding it from the domain of the theory, holding it in abeyance (promising to deal with it later), and reinterpreting it. Careful attention to these strategies leads to pedagogical approaches that more effectively produce theory change. For example, Chinn and Brewer recommend helping children construct necessary background knowledge before introduction of the anomalous data, combined with the presentation of an intelligible and plausible alternative theory. In addition, bolstering the credibility of the anomalous data, avoiding ambiguities, and using multiple lines of evidence can be effective at producing theory change.

Is the Bayesian analysis falsifiable?

The previous sections have illustrated the impressive scope of the Bayesian analysis, but is it too impressive? Could it explain anything if we’re creative enough at devising priors and auxiliaries that conform to the model’s predictions? In other words, are Bayesians falling victim to their own Duhem–Quine thesis? Some psychologists say yes—that the success or failure of Bayesian models of cognition hinges on ad hoc choices of priors and likelihoods that conveniently fit the data (Marcus & Davis, 2013 ; Bowers & Davis, 2012 ).

It is true that Bayesian models can be abused in this way, and perhaps sometimes are. Nonetheless, Bayesian models are falsifiable, because their key predictions are not particular beliefs but particular regularities in belief updating. If I can independently measure (or experimentally impose) your prior and likelihood, then Bayes’ rule dictates one and only one posterior. If this posterior does not conform to Bayes’ rule, then the model has been falsified. Many tests of this sort have been carried out, with the typical result (e.g., Evans et al., 2002 ) being that posterior judgments utilize both the prior and the likelihood, but do not precisely follow Bayes’ rule (in some cases relying too much on the prior, and in other cases relying too much on the likelihood). The point here is not to establish whether people carry out exact Bayesian inference (they almost surely do not; see Dasgupta et al., 2017 ), but rather to show that they are not completely arbitrary.
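The logic of such a test is simple: once the prior and likelihood are fixed by the experimenter, Bayes' rule leaves no free parameters in the posterior. The sketch below uses a bookbag-and-poker-chip style task with hypothetical numbers.

```python
# With prior and likelihood fixed by the experimenter, Bayes' rule
# prescribes one and only one posterior.

def bayes_posterior(prior_h, like_d_given_h, like_d_given_alt):
    """P(h | d) for a binary hypothesis space."""
    num = prior_h * like_d_given_h
    return num / (num + (1 - prior_h) * like_d_given_alt)

# Hypothetical setup: urn A is 70% red chips, urn B is 30% red; an urn is
# chosen by a fair coin flip and a single red chip is drawn from it.
prescribed = bayes_posterior(0.5, 0.7, 0.3)  # ≈ 0.7
# A subject who reliably reports, say, 0.6 is conservative relative to
# Bayes; any reliable deviation from the prescribed value falsifies the
# exact-Bayes model for this task.
```

The falsification target is the updating regularity, not any particular belief: conservatism (over-reliance on the prior) and base-rate neglect (over-reliance on the likelihood) are both detectable deviations under this scheme.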

As this article has emphasized, theories consist of multiple hypotheses (some central, some auxiliary) that work in concert to produce observations. Falsification of theories rests upon isolation and evaluation of these individual components; the theory as a whole cannot be directly falsified (Quine, 1951 ). The same is true for the Bayesian analysis of auxiliary hypotheses. In order to test this account, we would first need to independently establish the hypothesis space, the likelihood, and the prior. A systematic study of this sort has yet to be undertaken.

Conclusions

No one likes being wrong, but most of us believe that we can be wrong—that we would revise our beliefs when confronted with compelling disconfirmatory evidence. We conventionally think of our priors as inductive biases that may eventually be relinquished as we observe more data. However, priors also color our interpretation of data, determining how their evidential impact should be distributed across the web of beliefs. Certain kinds of probabilistic assumptions about the world lead one’s beliefs (under perfect rationality) to be remarkably resistant to disconfirmation, in some cases even transmuting disconfirmation into confirmation. This should not be interpreted as an argument that people are perfectly rational, only that many aspects of their behavior that seem irrational on the surface may in fact be compatible with rationality when understood in terms of reasoning about auxiliary hypotheses.

An important implication is that if we want to change the beliefs of others, we need to attend to the structure of their belief systems rather than (or in addition to) the errors in their belief updating mechanisms. Rhetorical tactics such as exposing hypocrisies, staging interventions, declaiming righteous truths, and launching informational assaults against another person’s central hypotheses are all doomed to be relatively ineffective from the point of view articulated here. To effectively persuade, one must incrementally chip away at the “protective belt” of auxiliary hypotheses until the central hypothesis can be wrested loose. The inherent laboriousness of this tactic may be why social and scientific progress is so slow, even with the most expert of persuasion artists.

As a caveat, we should keep in mind that whether a particular intuitive theory satisfies these properties will naturally vary across domains and an individual’s experience.

Strevens ( 2001 ) notes that this expression does not hold if the data affect the posterior in ways other than falsifying the conjunct ha , although such scenarios are probably rare.

We can think of this conditional prior as specifying strength of belief in an auxiliary given that one already believes a particular central hypothesis. In other words, it assumes that different central hypotheses invoke different distributions over auxiliaries. This seems intuitive insofar as auxiliaries will tend to be highly theory-specific (you don’t hypothesize auxiliaries about baseball when contemplating cosmology).

This is not to deny that some forms of motivated reasoning exist, but only to assert particular ways in which robustness to disconfirmation arises from rational inference.

Note that there are a number of reasons why people might generate sparse hypotheses besides having a sparse prior, such as computational limits (cf. Dasgupta et al., 2017 ).

Some evidence suggests that people can adaptively determine which causal theory (deterministic or probabilistic) is most suitable for a given domain (Griffiths & Tenenbaum, 2009 ).

Yeung and Griffiths ( 2015 ) presented empirical evidence favoring a preference for (near) determinism but not sparsity, though other experiments have suggested that both sparsity and determinism are required to explain human causal inferences (Powell, Merrick, Lu, & Holyoak, 2016 ).

Indeed, there has been a vigorous debate in psychology about the validity of Bayesian rationality as a model of human cognition (e.g., Jones and Love, 2011 ). Here I am merely asking the reader to consider the conditional claim that if people are Bayesian with sparse and deterministic intuitive theories, then they would exhibit robustness to disconfirmation.

It is important to distinguish this view from the stronger thesis that no theory-neutral stage of perceptual analysis exists (e.g., Churchland, 1979 ). As pointed out by Fodor ( 1984 ), we can accept that the semantic interpretation of percepts is theory-dependent without abandoning the possibility that there are some cognitively impenetrable aspects of perception.

How can this debunking strategy succeed when theorists can produce new auxiliary hypotheses ad infinitum ? The Bayesian analysis makes provision for this: new auxiliaries will only be considered if they have appreciable probability, P ( a | h ), relative to the prior, P ( h ).

The generality of this effect has been the subject of controversy, with some authors (Shah, Harris, Bird, Catmur, & Hahn, 2016 ) finding no evidence for an optimism bias. However, these null results have themselves been controversial: correcting confounds in the methodology (Garrett & Sharot, 2017 ) and using model-based estimation techniques (Kuzmanovic & Rigoux, 2017 ) have indicated a robust optimism bias.

This point is closely related to the position known as skeptical theism (McBrayer, 2010 ), which argues that our inability to apprehend God’s reasons for certain events (e.g., evil) does not justify the claim that no such reasons exist. This position undercuts inductive arguments against the existence of God that rely on the premise that no reasons exist for certain events.

Abramson, L. Y., Seligman, M. E., & Teasdale, J. D. (1978). Learned helplessness in humans: Critique and reformulation. Journal of Abnormal Psychology , 87 , 49–74.


Alloy, L. B., Abramson, L. Y., & Viscusi, D. (1981). Induced mood and the illusion of control. Journal of Personality and Social Psychology , 41 , 1129–1140.


Austerweil, J. L., & Griffiths, T. L. (2011). Seeking confirmation is rational for deterministic hypotheses. Cognitive Science , 35 , 499–526.

Austerweil, J. L., & Griffiths, T. L. (2013). A nonparametric Bayesian framework for constructing flexible feature representations. Psychological Review , 120 , 817–851.

Bench, S. W., Schlegel, R. J., Davis, W. E., & Vess, M. (2015). Thinking about change in the self and others: The role of self-discovery metaphors and the true self. Social Cognition , 33 , 169–185.

Biederman, I. (1987). Recognition-by-components: a theory of human image understanding. Psychological Review , 94 , 115–147.

Bouton, M. E. (2004). Context and behavioral processes in extinction. Learning & Memory , 11 , 485–494.

Bovens, L., & Hartmann, S. (2002). Bayesian networks and the problem of unreliable instruments. Philosophy of Science , 69 , 29–72.

Bowers, J. S., & Davis, C. J. (2012). Bayesian just-so stories in psychology and neuroscience. Psychological Bulletin , 138 , 389–414.

Boyer, P. (2003). Religious thought and behaviour as by-products of brain function. Trends in Cognitive Sciences , 7 , 119–124.

Buchanan, D. W., & Sobel, D. M. (2011). Children posit hidden causes to explain causal variability. In Proceedings of the 33rd annual conference of the cognitive science society .

Buchanan, D. W., Tenenbaum, J. B., & Sobel, D. M. (2010). Edge replacement and nonindependence in causation. In Proceedings of the 32nd annual conference of the cognitive science society , (919–924).

Carey, S. (2009) The Origin of Concepts . Oxford: Oxford University Press.


Champagne, A., Gunstone, R. F., & Klopfer, L. E. (1985). Instructional consequences of students’ knowledge about physical phenomena. In L. West, & A. Pines (Eds.) Cognitive structure and conceptual change (pp. 61–68). Academic Press.

Chance, Z., Norton, M. I., Gino, F., & Ariely, D. (2011). Temporal view of the costs and benefits of self-deception. Proceedings of the National Academy of Sciences , 108 , 15655–15659.

Chater, N., & Loewenstein, G. (2016). The under-appreciated drive for sense-making. Journal of Economic Behavior & Organization , 126 , 137–154.

Chelonis, J. J., Calton, J. L., Hart, J. A., & Schachtman, T. R. (1999). Attenuation of the renewal effect by extinction in multiple contexts. Learning and Motivation , 30 , 1–14.

Chinn, C. A., & Brewer, W. F. (1993). The role of anomalous data in knowledge acquisition: a theoretical framework and implications for science instruction. Review of Educational Research , 63 , 1–49.

Christy, A. G., Kim, J., Vess, M., Schlegel, R. J., & Hicks, J. A. (2017). The reciprocal relationship between perceptions of moral goodness and knowledge of others’ true selves. Social Psychological and Personality Science .

Churchland, P. M. (1979) Scientific realism and the plasticity of mind . Cambridge: Cambridge University Press.

Cook, J., & Lewandowsky, S. (2016). Rational irrationality: Modeling climate change belief polarization using Bayesian networks. Topics in Cognitive Science , 8 , 160–179.

Dasgupta, I., Schulz, E., & Gershman, S. J. (2017). Where do hypotheses come from? Cognitive Psychology , 96 , 1–25.

Davis, D., Sundahl, I., & Lesbo, M. (2000). Illusory personal control as a determinant of bet size and type in casino craps games. Journal of Applied Social Psychology , 30 , 1224–1242.

De Freitas, J., Sarkissian, H., Newman, G., Grossmann, I., De Brigard, F., Luco, A., & Knobe, J. (2016). Consistent belief in a good true self in misanthropes and three interdependent cultures. Unpublished manuscript .

De Freitas, J., Tobia, K. P., Newman, G. E., & Knobe, J. (2017). Normative judgments and individual essence. Cognitive Science , 41 , 382–402.

Dorfman, H., Bhui, R., Hughes, B., & Gershman, S. (2018). Causal inference about good and bad news. Unpublished .

Dorling, J. (1979). Bayesian personalism, the methodology of scientific research programmes, and Duhem’s problem. Studies in History and Philosophy of Science Part A , 10 , 177–187.

Duhem, P. M. (1954) The Aim and Structure of Physical Theory . Princeton: Princeton University Press.

Earman, J. (1992) Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory . Cambridge: MIT Press.


Earman, J. (2000) Hume’s Abject Failure: The Argument Against Miracles . Oxford: Oxford University Press.

Eil, D., & Rao, J. M. (2011). The good news-bad news effect: asymmetric processing of objective information about yourself. American Economic Journal: Microeconomics , 3 , 114–138.

Evans, J. S. B., Handley, S. J., Over, D. E., & Perham, N. (2002). Background beliefs in Bayesian inference. Memory & Cognition , 30 , 179–190.

Fernandez-Duque, D., & Wifall, T. (2007). Actor/observer asymmetry in risky decision making. Judgment and Decision Making , 2 , 1.

Feyerabend, P. (1975). Against Method . Verso.

Fitelson, B., & Waterman, A. (2005). Bayesian confirmation and auxiliary hypotheses revisited: A reply to Strevens. The British Journal for the Philosophy of Science , 56 , 293–302.

Fodor, J. (1984). Observation reconsidered. Philosophy of Science , 51 , 23–43.

Frosch, C. A., & Johnson-Laird, P. N. (2011). Is everyday causation deterministic or probabilistic? Acta Psychologica , 137 , 280–291.

Garrett, N., & Sharot, T. (2017). Optimistic update bias holds firm: Three tests of robustness following shah others. Consciousness and Cognition , 50 , 12–22.


Gershman, S. J., & Blei, D. M. (2012). A tutorial on Bayesian nonparametric models. Journal of Mathematical Psychology , 56 , 1–12.

Gershman, S. J., Jones, C. E., Norman, K. A., Monfils, M. -H., & Niv, Y. (2013). Gradual extinction prevents the return of fear: implications for the discovery of state. Frontiers in Behavioral Neuroscience , 7 , 164.

Gershman, S. J., Radulescu, A., Norman, K. A., & Niv, Y. (2014). Statistical computations underlying the dynamics of memory updating. PLoS Computational Biology , 10 , e1003939.

Gershman, S. J., Monfils, M. -H., Norman, K. A., & Niv, Y. (2017). The computational nature of memory modification. eLife , 6 , e23763.

Gervais, S., & Odean, T. (2001). Learning to be overconfident. Review of Financial Studies , 14 , 1–27.

Gibson, B., & Sanbonmatsu, D. M. (2004). Optimism, pessimism, and gambling: The downside of optimism. Personality and Social Psychology Bulletin , 30 , 149–160.

Gilovich, T. (1983). Biased evaluation and persistence in gambling. Journal of Personality and Social Psychology , 44 , 1110–1126.

Gilovich, T. (1991) How We Know What Isn’t So . New York City: Simon and Schuster.

Gilovich, T., & Douglas, C. (1986). Biased evaluations of randomly determined gambling outcomes. Journal of Experimental Social Psychology , 22 , 228–241.

Gino, F., Norton, M. I., & Weber, R. A. (2016). Motivated Bayesians: Feeling moral while acting egoistically. The Journal of Economic Perspectives , 30 , 189–212.

Goertzel, T. (1994). Belief in conspiracy theories. Political Psychology , 15 , 731–742.

Gopnik, A. (2012). Scientific thinking in young children: Theoretical advances, empirical research, and policy implications. Science , 337 , 1623–1627.

Gopnik, A., & Wellman, H. M. (1992). Why the child’s theory of mind really is a theory. Mind & Language , 7 , 145–171.

Gregory, R. L. (1970) The Intelligent Eye . London: Weidenfeld and Nicolson.

Griffiths, T. L., & Tenenbaum, J. B. (2009). Theory-based causal induction. Psychological Review , 116 , 661–716.

Grünbaum, A. (1962). The falsifiability of theories: total or partial? a contemporary evaluation of the Duhem-Quine thesis. Synthese , 14 , 17–34.

Gunther, L. M., Denniston, J. C., & Miller, R. R. (1998). Conducting exposure treatment in multiple contexts can prevent relapse. Behaviour Research and Therapy , 36 , 75–91.

Harding, S. (1976) Can theories be refuted? Essays on the Duhem–Quine thesis . Dordrecht: D. Reidel Publishing Company.

Harris, P. (1996). Sufficient grounds for optimism?: The relationship between perceived controllability and optimistic bias. Journal of Social and Clinical Psychology , 15 , 9–52.

Harris, A. J., & Osman, M. (2012). The illusion of control: a Bayesian perspective. Synthese , 189 , 29–38.

Haslam, N., Bastian, B., & Bissett, M. (2004). Essentialist beliefs about personality and their implications. Personality and Social Psychology Bulletin , 30 , 1661–1673.

Helmholtz, H. v. (1867). Handbuch der physiologischen Optik . Voss.

Hempel, C. G. (1966) Philosophy of Natural Science . Upper Saddle River: Prentice-Hall.

Hendrickson, A. T., Navarro, D. J., & Perfors, A. (2016). Sensitivity to hypothesis size during information search. Decision , 3 , 62–80.

Hewstone, M. (1994). Revision and change of stereotypic beliefs: in search of the elusive subtyping model. European Review of Social Psychology , 5 , 69–109.

Howson, C. (2011) Objecting to God . Cambridge: Cambridge University Press.

Howson, C., & Urbach, P. (2006) Scientific reasoning: the Bayesian approach . Chicago: Open Court Publishing.

Hromádka, T., DeWeese, M. R., & Zador, A. M. (2008). Sparse representation of sounds in the unanesthetized auditory cortex. PLoS Biology , 6 , e16.

Hume, D. (1748). An Enquiry Concerning Human Understanding .

Huys, Q. J., & Dayan, P. (2009). A Bayesian formulation of behavioral control. Cognition , 113 , 314–328.

Jaynes, E. T. (2003) Probability Theory: The Logic of Science . Cambridge: Cambridge University Press.

Jern, A., Chang, K. -M. K., & Kemp, C. (2014). Belief polarization is not always irrational. Psychological Review , 121 , 206–224.

Johnston, L., & Hewstone, M. (1992). Cognitive models of stereotype change: 3. subtyping and the perceived typicality of disconfirming group members. Journal of Experimental Social Psychology , 28 , 360–386.

Jones, M., & Love, B. C. (2011). Bayesian fundamentalism or enlightenment? on the explanatory status and theoretical contributions of Bayesian models of cognition. Behavioral and Brain Sciences , 34 , 169–188.

Karmiloff-Smith, A., & Inhelder, B. (1975). If you want to get ahead, get a theory. Cognition , 3 , 195–212.

Klayman, J., & Ha, Y. -W. (1987). Confirmation, disconfirmation, and information in hypothesis testing. Psychological Review , 94 , 211–228.

Knill, D. C., & Richards, W. (1996) Perception as Bayesian inference . Cambridge: Cambridge University Press.

Koehler, J. J. (1993). The influence of prior beliefs on scientific judgments of evidence quality. Organizational Behavior and Human Decision Processes , 56 , 28–55.

Korn, C. W., Prehn, K., Park, S. Q., Walter, H., & Heekeren, H. R. (2012). Positively biased processing of self-relevant social feedback. Journal of Neuroscience , 32 , 16832–16844.

Korn, C., Sharot, T., Walter, H., Heekeren, H., & Dolan, R. (2014). Depression is related to an absence of optimistically biased belief updating about future life events. Psychological Medicine , 44 , 579–592.

Köszegi, B. (2006). Ego utility, overconfidence, and task choice. Journal of the European Economic Association , 4 , 673–707.

Kuhn, T. S. (1962) The Structure of Scientific Revolutions . Chicago: University of Chicago Press.

Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin , 108 , 480–498.

Kuzmanovic, B., & Rigoux, L. (2017). Valence-dependent belief updating: Computational validation. Frontiers in Psychology , 8 , 1087.

Ladouceur, R., & Sévigny, S. (2005). Structural characteristics of video lotteries: Effects of a stopping device on illusion of control and gambling persistence. Journal of Gambling Studies , 21 , 117–131.

Lakatos, I. (1976). Falsification and the methodology of scientific research programmes. In Can Theories be Refuted? (pp. 205–259): Springer.

Langer, E. J. (1975). The illusion of control. Journal of Personality and Social Psychology , 32 , 311–328.

Laudan, L. (1990). Demystifying underdetermination. Minnesota studies in the philosophy of science , 14 (1990), 267–297.

Lefebvre, G., Lebreton, M., Meyniel, F., Bourgeois-Gironde, S., & Palminteri, S. (2017). Behavioural and neural characterization of optimistic reinforcement learning. Nature Human Behaviour , 1 , 0067.

Leiserowitz, A. A., Maibach, E. W., Roser-Renouf, C., Smith, N., & Dawson, E. (2013). Climategate, public opinion, and the loss of trust. American Behavioral Scientist , 57 , 818–837.

Lewandowsky, S., Oberauer, K., & Gignac, G. E. (2013). NASA faked the moon landing – therefore, (climate) science is a hoax: An anatomy of the motivated rejection of science. Psychological Science , 24 , 622–633.

Lippmann, W. (1922) Public Opinion . New York: Harcourt, Brace.

Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology , 37 , 2098–2109.

Lu, H., Yuille, A. L., Liljeholm, M., Cheng, P. W., & Holyoak, K. J. (2008). Bayesian generic priors for causal learning. Psychological Review , 115 , 955–984.

MacKay, D. J. (2003) Information Theory: Inference and Learning Algorithms . Cambridge: Cambridge University Press.

Marcus, G. F., & Davis, E. (2013). How robust are probabilistic models of higher-level cognition? Psychological Science , 24 , 2351–2360.

Mayo, D. G. (1997). Duhem’s problem, the Bayesian way, and error statistics, or “what’s belief got to do with it?. Philosophy of Science , 64 , 222–244.

Mayrhofer, R., & Waldmann, M. R. (2015). Sufficiency and necessity assumptions in causal structure induction. Cognitive Science , 40 , 2137–2150.

McBrayer, J. P. (2010). Skeptical theism. Philosophy Compass , 5 , 611–623.

McKenzie, C. R., & Mikkelsen, L. A. (2000). The psychological side of Hempel’s paradox of confirmation. Psychonomic Bulletin & Review , 7 , 360–366.

McKenzie, C. R., & Mikkelsen, L. A. (2007). A Bayesian view of covariation assessment. Cognitive Psychology , 54 , 33–61.

McKenzie, C. R., Ferreira, V. S., Mikkelsen, L. A., McDermott, K. J., & Skrable, R. P. (2001). Do conditional hypotheses target rare events? Organizational Behavior and Human Decision Processes , 85 , 291–309.

Molouki, S., & Bartels, D. M. (2017). Personal change and the continuity of the self. Cognitive Psychology , 93 , 1–17.

Moore, M. T., & Fresco, D. M. (2012). Depressive realism: a meta-analytic review. Clinical Psychology Review , 32 , 496–509.

Muentener, P., & Schulz, L. (2014). Toddlers infer unobserved causes for spontaneous events. Frontiers in Psychology, 23 , 5.

Navarro, D. J., & Perfors, A. F. (2011). Hypothesis generation, sparse categories, and the positive test strategy. Psychological Review , 118 , 120–134.

Newman, G. E., Bloom, P., & Knobe, J. (2014). Value judgments and the true self. Personality and Social Psychology Bulletin , 40 , 203–216.

Newman, G. E., De Freitas, J., & Knobe, J. (2015). Beliefs about the true self explain asymmetries based on moral judgment. Cognitive Science , 39 , 96–125.

Nisbett, R. E., & Ross, L. (1980) Human inference: Strategies and shortcomings of social judgment . Upper Saddle River: Prentice-Hall.

Oaksford, M., & Chater, N. (1994). A rational analysis of the selection task as optimal data selection. Psychological Review , 101 , 608–631.

Olshausen, B. A., & Field, D. J. (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature , 381 , 607–609.

Perfors, A., & Navarro, D. (2009). Confirmation bias is rational when hypotheses are sparse. Proceedings of the 31st Annual Conference of the Cognitive Science Society .

Poo, C., & Isaacson, J. S. (2009). Odor representations in olfactory cortex: “sparse” coding, global inhibition, and oscillations. Neuron , 62 , 850–861.

Popper, K. (1959) The Logic of Scientific Discovery . New York: Harper & Row.

Powell, D., Merrick, M. A., Lu, H., & Holyoak, K. J. (2016). Causal competition based on generic priors. Cognitive Psychology , 86 , 62–86.

Queller, S., & Smith, E. R. (2002). Subtyping versus bookkeeping in stereotype learning and change: Connectionist simulations and empirical findings. Journal of Personality and Social Psychology , 82 , 300–313.

Quine, W. V. (1951). Two dogmas of empiricism. The Philosophical Review, 60 , 20–43.

Rescorla, R. A. (2004). Spontaneous recovery. Learning & Memory , 11 , 501–509.

Rock, I. (1983) The Logic of Perception . Cambridge: MIT Press.

Rosch, E. (1978). Principles of categorization. In Cognition and Categorization (pp. 27–48). Hillsdale: Lawrence Erlbaum Associates.

Saxe, R., Tenenbaum, J., & Carey, S. (2005). Secret agents: Inferences about hidden causes by 10- and 12-month-old infants. Psychological Science , 16 , 995–1001.

Schulz, L. E., & Sommerville, J. (2006). God does not play dice: Causal determinism and preschoolers’ causal inferences. Child Development , 77 , 427–442.

Schulz, L. E., Goodman, N. D., Tenenbaum, J. B., & Jenkins, A. C. (2008). Going beyond the evidence: Abstract laws and preschoolers’ responses to anomalous data. Cognition , 109 , 211–223.

Seligman, M. E. (1975) Helplessness: On depression, development, and death . New York: WH Freeman/Times Books/Henry Holt & Co.

Shah, P., Harris, A. J., Bird, G., Catmur, C., & Hahn, U. (2016). A pessimistic view of optimistic belief updating. Cognitive Psychology , 90 , 71–127.

Sharot, T. (2011a) The Optimism Bias . New York: Vintage.

Sharot, T., & Garrett, N. (2016). Forming beliefs: why valence matters. Trends in Cognitive Sciences , 20 , 25–33.

Sharot, T., Korn, C. W., & Dolan, R. J. (2011b). How unrealistic optimism is maintained in the face of reality. Nature Neuroscience , 14 , 1475–1479.

Stankevicius, A., Huys, Q. J., Kalra, A., & Seriès, P. (2014). Optimism as a prior belief about the probability of future reward. PLoS Computational Biology , 10 , e1003605.

Stratton, G. M. (1897). Vision without inversion of the retinal image. Psychological Review , 4 , 341–360.

Strevens, M. (2001). The Bayesian treatment of auxiliary hypotheses. The British Journal for the Philosophy of Science , 52 , 515–537.

Strohminger, N., Knobe, J., & Newman, G. (2017). The true self: A psychological concept distinct from the self. Perspectives on Psychological Science .

Sunstein, C. R., & Vermeule, A. (2009). Conspiracy theories: Causes and cures. Journal of Political Philosophy , 17 , 202–227.

Swinburne, R. G. (1970) The Concept of Miracle . Berlin: Springer.

Swinburne, R. G. (2004) The Existence of God . Oxford: Oxford University Press.

Van Rooy, D., Van Overwalle, F., Vanhoomissen, T., Labiouse, C., & French, R. (2003). A recurrent connectionist model of group biases. Psychological Review , 110 , 536–563.

Vosniadou, S., & Brewer, W. F. (1992). Mental models of the earth: a study of conceptual change in childhood. Cognitive Psychology , 24 , 535–585.

Weber, R., & Crocker, J. (1983). Cognitive processes in the revision of stereotypic beliefs. Journal of Personality and Social Psychology , 45 , 961–977.

Weinstein, N. D. (1980). Unrealistic optimism about future life events. Journal of Personality and Social Psychology , 39 , 806–820.

Wu, Y., Muentener, P., & Schulz, L. E. (2015). The invisible hand: toddlers connect probabilistic events with agentive causes. Cognitive Science , 40 , 1854–1876.

Yeung, S., & Griffiths, T. L. (2015). Identifying expectations about the strength of causal relationships. Cognitive Psychology , 76 , 1–29.


Acknowledgments

I am grateful to Michael Strevens, Josh Tenenbaum, Tomer Ullman, Alex Holcombe, and Nick Chater for helpful discussions. This work was supported by the Center for Brains, Minds & Machines (CBMM), funded by NSF STC award CCF-1231216.

Author information

Authors and Affiliations

Department of Psychology and Center for Brain Science, Harvard University, 52 Oxford St., Room 295.05, Cambridge, MA, 02138, USA

Samuel J. Gershman


Corresponding author

Correspondence to Samuel J. Gershman .

Appendix: A sparse prior over auxiliary hypotheses

In this section, we define a sparse prior over auxiliary hypotheses using the Dirichlet distribution, which is the conjugate prior for the multinomial distribution. We focus on the case where the number of possible auxiliary hypotheses is finite (denoted by K ), though extensions to infinite spaces are possible (Gershman & Blei, 2012 ). The symmetric Dirichlet probability density function over the K -simplex is given by:

\(P(\theta ) = \frac{\Gamma (K\alpha )}{\Gamma (\alpha )^{K}} \prod _{k=1}^{K} \theta _{k}^{\alpha - 1},\)

where Γ(⋅) is the Gamma function, and α > 0 is a concentration parameter that controls the sparsity of the distribution. As α approaches 0, the resulting distribution over auxiliary hypotheses, P ( a | 𝜃 ), places most of its probability mass on a small number of auxiliary hypotheses, whereas larger values of α induce distributions that spread their mass more evenly.
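As an illustrative check (my own sketch, not code from the article), the following Python snippet draws from a symmetric Dirichlet by normalizing independent Gamma variates and shows that small α yields near-deterministic ("sparse") distributions over auxiliaries, while large α spreads the mass evenly:

```python
import random

def dirichlet_sample(alpha, k, rng):
    """Draw theta from a symmetric Dirichlet(alpha) on the K-simplex
    by normalizing K independent Gamma(alpha, 1) variates."""
    gammas = [rng.gammavariate(alpha, 1.0) for _ in range(k)]
    total = sum(gammas)
    if total == 0.0:  # guard against floating-point underflow at tiny alpha
        return dirichlet_sample(alpha, k, rng)
    return [g / total for g in gammas]

def mean_max_mass(alpha, k=10, n=2000, seed=0):
    """Average, over n draws, of the largest component of theta.
    Values near 1 mean each draw concentrates on a single auxiliary."""
    rng = random.Random(seed)
    return sum(max(dirichlet_sample(alpha, k, rng)) for _ in range(n)) / n

sparse = mean_max_mass(alpha=0.05)   # small alpha: sparse, near-deterministic draws
diffuse = mean_max_mass(alpha=10.0)  # large alpha: mass spread over all K auxiliaries
print(sparse, diffuse)  # sparse is close to 1, diffuse is close to 1/K
```

The parameter values (K = 10, α = 0.05 vs. 10) are arbitrary choices for the demonstration; any sufficiently small versus large α shows the same contrast.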

What are the consequences for the posterior over auxiliaries under the sparsity assumption ( α close to 0)? Let us consider the case where auxiliary a k predicts the observed data d with probability π k (marginalizing over h ). The posterior distribution is given by:

\(P(a_{k} \mid d) \propto \sum _{j=1}^{K} \pi _{j} \left (\alpha + \mathbb {I}[j = k]\right ),\)

where \(\mathbb {I}[\cdot ]= 1\) if its argument is true, 0 otherwise. In the sparse limit ( α → 0), the posterior probability of an auxiliary is proportional to its agreement with the data: P ( a k | d ) ∝ π k . If we restrict ourselves to auxiliaries that predict the data perfectly ( π k = 1) or not at all ( π k = 0), then the resulting posterior will be uniform over auxiliaries consistent with the data. It follows that P ( d | h ¬ a ) = 1 in the sparse limit. Thus, sparsity favors auxiliaries that place high probability on the data, consistent with the assumptions underlying the analysis of Strevens ( 2001 ).
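The sparse-limit claim can be sanity-checked numerically. The Monte Carlo sketch below (my own illustration, under one natural reading of the model: θ drawn from a symmetric Dirichlet, with auxiliary a k predicting the data with probability π k) estimates the posterior over auxiliaries as E[θ k · w] / E[w], where w(θ) = Σ j π j θ j is the probability of the data under θ:

```python
import random

def posterior_over_auxiliaries(pi, alpha, n=50000, seed=1):
    """Monte Carlo estimate of P(a_k | d) under a symmetric
    Dirichlet(alpha) prior over theta, where auxiliary a_k predicts
    the data with probability pi[k]. Uses the identity
    P(a_k | d) = E[theta_k * w] / E[w], w(theta) = sum_j pi[j]*theta[j]."""
    rng = random.Random(seed)
    k = len(pi)
    num = [0.0] * k
    den = 0.0
    for _ in range(n):
        gammas = [rng.gammavariate(alpha, 1.0) for _ in range(k)]
        total = sum(gammas)
        if total == 0.0:  # underflow guard for very small alpha
            continue
        theta = [g / total for g in gammas]
        w = sum(p * t for p, t in zip(pi, theta))  # P(d | theta)
        for j in range(k):
            num[j] += theta[j] * w
        den += w
    return [x / den for x in num]

pi = [1.0, 0.5, 0.0]  # how well each auxiliary predicts the data
post = posterior_over_auxiliaries(pi, alpha=0.05)
print(post)  # roughly proportional to pi, as predicted in the sparse limit
```

With α near 0, the estimated posterior tracks π k (here, roughly 0.62, 0.33, and 0.04); rerunning with a large α flattens the posterior toward uniform.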


About this article

Gershman, S.J. How to never be wrong. Psychon Bull Rev 26 , 13–28 (2019). https://doi.org/10.3758/s13423-018-1488-8


Published : 24 May 2018

Issue Date : 15 February 2019

DOI : https://doi.org/10.3758/s13423-018-1488-8


  • Bayesian modeling
  • Computational learning theories
  • Philosophy of science

What Are the Elements of a Good Hypothesis?



A hypothesis is an educated guess or prediction of what will happen. In science, a hypothesis proposes a relationship between factors called variables. A good hypothesis relates an independent variable and a dependent variable. The effect on the dependent variable depends on or is determined by what happens when you change the independent variable . While you could consider any prediction of an outcome to be a type of hypothesis, a good hypothesis is one you can test using the scientific method. In other words, you want to propose a hypothesis to use as the basis for an experiment.

Cause and Effect or 'If, Then' Relationships

A good experimental hypothesis can be written as an if-then statement to establish cause and effect between the variables. If you make a change to the independent variable, then the dependent variable will respond. Here's an example of a hypothesis:

If you increase the duration of light, (then) corn plants will grow more each day.

The hypothesis establishes two variables: duration of light exposure and rate of plant growth. An experiment could be designed to test whether the rate of growth depends on the duration of light. The duration of light is the independent variable, which you can control in an experiment. The rate of plant growth is the dependent variable, which you can measure and record as data in an experiment.

Key Points of a Good Hypothesis

When you have an idea for a hypothesis, it may help to write it out in several different ways. Review your choices and select a hypothesis that accurately describes what you are testing.

  • Does the hypothesis relate an independent and dependent variable? Can you identify the variables?
  • Can you test the hypothesis? In other words, could you design an experiment that would allow you to establish or disprove a relationship between the variables?
  • Would your experiment be safe and ethical?
  • Is there a simpler or more precise way to state the hypothesis? If so, rewrite it.

What If the Hypothesis Is Incorrect?

It's not wrong or bad if the hypothesis is not supported or is incorrect. In fact, this outcome may tell you more about the relationship between the variables than if the hypothesis were supported. You may even intentionally write your hypothesis as a null (no-difference) hypothesis, which you then try to disprove in order to establish a relationship between the variables.

For example, the hypothesis:

The rate of corn plant growth does not depend on the duration of light.

This can be tested by exposing corn plants to different length "days" and measuring the rate of plant growth. A statistical test can be applied to measure how well the data support the hypothesis. If the hypothesis is not supported, then you have evidence of a relationship between the variables. It's easier to establish cause and effect by testing whether "no effect" is found. Alternatively, if the null hypothesis is supported, then you have shown the variables are not related. Either way, your experiment is a success.
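As a sketch of what such a statistical test might look like in practice, here is a minimal two-sample permutation test in Python. The growth numbers are invented for illustration; a real experiment would use measured data:

```python
import random
from statistics import mean

def permutation_test(group_a, group_b, n_perm=10000, seed=0):
    """Two-sample permutation test: estimates the p-value for the null
    hypothesis that both groups come from the same distribution, using
    the absolute difference in group means as the test statistic."""
    rng = random.Random(seed)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling under the null
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical daily growth (cm) under short vs. long light exposure.
short_day = [1.1, 0.9, 1.0, 1.2, 0.8, 1.0]
long_day = [1.6, 1.4, 1.8, 1.5, 1.7, 1.3]

p = permutation_test(short_day, long_day)
print(p)  # a small p-value is evidence against the null hypothesis
```

If p falls below a chosen significance level (0.05 is conventional), you reject the null hypothesis of "no effect" and have evidence that growth depends on light duration.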

Need more examples of how to write a hypothesis ? Here you go:

  • If you turn out all the lights, you will fall asleep faster. (Think: How would you test it?)
  • If you drop different objects, they will fall at the same rate.
  • If you eat only fast food, then you will gain weight.
  • If you use cruise control, then your car will get better gas mileage.
  • If you apply a top coat, then your manicure will last longer.
  • If you turn the lights on and off rapidly, then the bulb will burn out faster.

Addict Sci Clin Pract , v.6(1); 2011 Jul

The Good Behavior Game and the Future of Prevention and Treatment

Sheppard G. Kellam

1 Bloomberg School of Public Health, The Johns Hopkins University, Baltimore, Maryland

Amelia C. L. Mackenzie

C. Hendricks Brown

2 Department of Epidemiology and Public Health, University of Miami, Miami, Florida

Jeanne M. Poduska

3 American Institutes for Research, Washington, DC

4 University of South Florida, Tampa, Florida

Hanno Petras

5 JBS International, Bethesda, Maryland

Holly C. Wilcox

6 The Johns Hopkins University School of Medicine, Baltimore, Maryland

The Good Behavior Game (GBG), a universal classroom behavior management method, was tested in first- and second-grade classrooms in Baltimore beginning in the 1985–1986 school year. Followup at ages 19–21 found significantly lower rates of drug and alcohol use disorders, regular smoking, antisocial personality disorder, delinquency and incarceration for violent crimes, suicide ideation, and use of school-based services among students who had played the GBG. Several replications with shorter followup periods have provided similar early results. We discuss the role of the GBG and possibly other universal prevention programs in the design of more effective systems for promoting children’s development and problem prevention and treatment services.

Drug, alcohol, and tobacco abuse and dependence disorders; antisocial personality disorder; violence; high-risk sexual behavior; and other disorders and problem behaviors impose huge personal, social, and economic costs on individuals, families, schools, and communities. The burden is borne also by institutions that treat or attempt to rehabilitate such problem behaviors and disorders.

Disruptive and aggressive behavior in classrooms as early as the first grade has repeatedly been identified as a risk factor for this spectrum of problems later in life ( Kellam et al., 2008 ). The Good Behavior Game (GBG) is a classroom-wide, teacher-implemented intervention that aims to improve classroom behavior and introduce young children to the role of being a student and a member of the classroom community.

In 1985, in close partnership with the Baltimore City Public School System (BCPSS), we initiated a large-scale, developmental field trial of the GBG that was epidemiologically based and randomized. The trial was implemented in 41 first- and second-grade classrooms within 19 elementary schools with two consecutive cohorts of first graders. The results in young adulthood were reported in a supplemental issue of Drug and Alcohol Dependence in June 2008. Here we summarize the theoretical basis, design, and results of the trial, which together lead to three conclusions:

  • Aggressive and disruptive behaviors in childhood play a causal role in a spectrum of social, behavioral, and psychiatric problems;
  • Introducing the GBG in first- and second-grade classrooms reduces the risk of some of these problems later in the life course;
  • The effectiveness of the GBG supports a role for universal prevention interventions in a redesigned system for child development and problem prevention and treatment.

Figure. A classroom playing the Good Behavior Game in Denver.

We also briefly review the findings to date of ongoing replication trials and address the implications of this work for researchers, practitioners, advocates, and policymakers. We believe that the underlying theory, data, and analyses support the development of a newly designed human development services system that integrates prevention and treatment and is closely interrelated with schools and classrooms.


THE GOOD BEHAVIOR GAME

The GBG was developed to help teachers manage classrooms without having to respond on an individual basis each time a student disrupted class. As designed by University of Kansas researchers Harriet Barrish, Muriel Saunders, and Montrose Wolf, the GBG increases a teacher’s precision and consistency in instructing elementary school students in appropriate classroom behavior. In documenting the effectiveness of the approach, an early observer noted reduced “talking out of turn” and “out of seat” behavior during times when the class played the GBG ( Barrish, Saunders, and Wolf, 1969 ).

Our first-generation, large-scale randomized field trials of the GBG in Baltimore began in the 1985–1986 school year. By that time, the positive results reported by Barrish and colleagues had been replicated in more than 20 small observational, nonrandomized studies that showed short-term improvement in student classroom behavior.

How the Game Was Played

Teachers used a manual to ensure precision in the implementation of the GBG and to support fidelity over time and replicability in other trial sites. Early in the first-grade year, teachers displayed a large poster that listed the rules of proper student behavior—for example, sitting still, talking in turn, and paying attention. Toward the end of the first quarter of the school year, when classroom membership had stabilized, teachers divided their students into three teams that were balanced as to gender, aggressive and disruptive behavior, and shy or isolated behavior.

Initially, the GBG was played for designated periods of 10 minutes, three times a week. A team was rewarded for an interval only when it had no more than four rule infractions during that interval. In this way, the team’s rewards were contingent on each member behaving well.

As the year continued, the GBG was played for increasing lengths of time and when students were working individually. In this way, the GBG facilitated learning without competing for instructional time. As the school year progressed, the rewards changed from tangible and immediate (e.g., stickers, erasers) to more abstract and deferred (e.g., gold stars, more time to do enjoyable activities).
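
The reward contingency described above can be sketched in a few lines of Python. This is purely an illustrative model of the rule — the actual game was run by teachers, not software — and all names (`MAX_INFRACTIONS`, `team_earns_reward`, `play_interval`) are hypothetical.

```python
# Illustrative sketch of the GBG reward rule (hypothetical names): a team
# earns the interval's reward only if it accrues four or fewer infractions.
MAX_INFRACTIONS = 4

def team_earns_reward(infractions):
    """A team is rewarded when it has no more than four rule infractions."""
    return infractions <= MAX_INFRACTIONS

def play_interval(team_infractions):
    """Return the teams that earn a reward for one game interval.

    Because the reward depends on the whole team's infraction count, each
    member's behavior affects every teammate's outcome.
    """
    return {team for team, count in team_infractions.items()
            if team_earns_reward(count)}

# Example: three balanced teams play a 10-minute interval.
rewarded = play_interval({"Team A": 2, "Team B": 5, "Team C": 0})
# rewarded == {"Team A", "Team C"}
```

The group-contingent reward is the essential design choice: it enlists peers, not just the teacher, in socializing each child to the student role.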

Why the Teacher and Classroom?

The GBG treats the classroom as a community. The teacher is central to the GBG, because he or she sets the rules for becoming a successful student and member of the community and also determines whether each child succeeds or fails. The GBG improves the precision with which the teacher conveys and the child receives these rules, and by doing so improves the teacher-child interaction and the child’s chances for success. In addition, in GBG trials, the better behaved children were observed to influence and socially integrate the children who behaved less appropriately.

Why the First Grade?

Two considerations recommend the first grade as a setting for preventive interventions:

  • Beginning first grade is a major transition for both the child and his or her family;
  • First grade is generally the first place where all children—that is, those at all levels of risk of school and behavior problems—can be found. All States in the United States require parents to register their children for first grade with the school district; in many States, this is the first required contact between children and any official system subsequent to birth registration.

The first-grade classroom is well-suited for interventions, such as the GBG, that focus on inculcating the role of students in classrooms. First grade is the first setting outside the home where many children learn the social and behavioral skills they will need to succeed in school. Although some children attend Head Start, kindergarten, or other preschool programs, the length and content of these programs vary.

The first grade is also a particularly appropriate setting in which to provide teachers with tools, as the GBG does, for effective classroom behavioral management. Early in this school year, teachers must organize the classroom, manage children’s behavior, and teach rules, but these skills are not intuitive. For example, children in our GBG trial were assigned to first-grade classrooms in a manner that ensured that the classrooms were equivalent with regard to behavior at the start of the school year. However, by the end of the first quarter, when we examined the behavior in the classrooms that had not participated in the GBG, we found that about half were doing relatively well in regard to aggressive and disruptive behavior, while the other half appeared markedly chaotic ( Kellam et al., 1998a ) (see box ).

TOOLS THAT TEACHERS NEED

School teachers very often report having received little training in tested methods of classroom behavior management. Pre-service teacher training does not emphasize this area, nor does the National Council for Accreditation of Teacher Education (NCATE) require proof of proficiency in this area for schools to be accredited ( NCATE, 2008 ).

Teachers—especially new ( National Commission on Teaching and America’s Future, 1997 ) and elementary school teachers ( Walter, Gouze, and Lim, 2006 )—rate such training as a pressing need. A lack of effective tools to socialize children into the role of student hampers their instruction. The challenge posed by aggressive and disruptive behavior overwhelms many teachers, leading to burnout and resignation from the profession.

The 1985 Baltimore GBG trial provided further evidence that the quality of classroom behavioral management in early grades has far-reaching consequences. Analysis of the data on children who attended standard program (i.e., non-GBG) first-grade classrooms showed a marked influence on the risk of severe aggressive behavior by middle school. Among children in well-managed classrooms, those rated in the top 25 percent for aggressive and disruptive behaviors were up to 2.7 times as likely as the average child to exhibit severe aggressive behavior by middle school. In contrast, in poorly managed classrooms, the risk differential was up to 59 times.

THE THEORY GUIDING THE TRIAL

Prevention trials yield the most insight when they are based on a research-backed theory about causes. For the past 4 decades, life course/social field theory has been a foundation for our research on early developmental risk factors and associated adult problem outcomes and their prevention ( Kellam et al., 1976 ). The theory has pointed to what we needed to measure and what interventions might be effective.

Life course/social field theory provides a dual-faceted view of mental health. In this perspective, adaptation has a social dimension and an individual, psychological dimension.


The social dimension focuses on how an individual is viewed by society, both overall and within specific social contexts. At each stage of life, there are a few main social fields where individuals face social task demands. For children, the classroom is such a field, where social task demands include an expectation that they will pay attention, obey rules, learn, and socialize appropriately with their peers and teachers. In each social field, the person’s ability to meet task demands is assessed or rated by individuals we call natural raters. Teachers and student peers are natural raters in classrooms.

Sometimes this rating process is formal, as in the case of teachers giving grades. At other times, it is informal, as when peers respond to a student. Even when ratings are less formal, however, outcomes such as rejection from the peer group can be very powerful. We call this process of demand and response “social adaptation” and the resulting outcome, “social adaptational status.”

An individual may be rated as maladapted for reasons that originate with himself or herself, with the rater, or in the process of demand and response between the two. A first-grader, for example, may behave inappropriately due to a developmental lag in ability to sit still and attend, because the teacher lacks effective methods to socialize students to behave appropriately, or because previous persistent bad behavior has created tension between the teacher and the student.

According to life course/social field theory, improving the way teachers socialize children in the classrooms will result in improved social adaptation of the children in the classroom social field. The theory also predicts that this early improved social adaptation will lead to better adaptation to other social fields over the life course ( Figure 1 ). It is this hypothesis that supports using an intervention like the GBG in first and second grade.

Figure 1. Life Course/Social Field Concept

The second dimension in life course/social field theory is the individual’s internal condition, or psychological well-being. Depression, anxiety, and thought disorder are examples of poor psychological well-being. Psychological well-being and social adaptational status can reciprocally influence each other over the course of development. For example, receiving poor grades may make a child feel depressed, and depression may make a child more likely to get poor grades. Although the GBG’s effects on psychological well-being are beyond the scope of this paper, we have reported on its impact on suicidal thoughts and attempts, and we continue to study this dimension ( Kellam et al., 2008 ; Wilcox et al., 2008 ).

RESEARCH DESIGN

The trial in the BCPSS tested two classroom interventions. The GBG focused on aggressive and disruptive behavior and is the subject of this paper. An enhanced reading intervention that aimed to improve classroom performance was also tested, but is only mentioned here to provide a complete picture of the study design.

Altogether, 41 classes in 19 schools in five socio-demographically distinct areas of Baltimore participated in the trial. All the students were of low to lower middle socioeconomic status, and 70 percent were African-Americans.

Assignment of Intervention Conditions

Within each urban area, three or four schools were matched and randomly assigned to deliver the GBG, the enhanced reading curriculum program, or no intervention. All students in all schools received the standard first-grade educational program.

Within each intervention school, the principal sequentially assigned students to a first-grade classroom by using an alphabetized list. The research staff then checked and in a few cases adjusted the class rosters with the principal to provide an equivalent distribution of children across classrooms with respect to gender, kindergarten records of behavior, socioeconomic status, and other criteria. Then, within the GBG intervention schools, each first-grade classroom with its teacher was randomly assigned to be a GBG classroom or a standard-program classroom.

This design created three types of control classrooms to compare with the GBG classrooms: (1) standard program classrooms within the schools where the GBG was tested; (2) standard program classrooms within the schools where the enhanced reading curriculum program was tested; and (3) all classrooms within the schools where no intervention was tested. These three controls allowed for extensive analyses that strengthened our confidence in the results. For example, when comparing intervention and standard program classrooms within the GBG schools, we eliminated school and community variation as potential explanations for any differences. Our comparisons of GBG classrooms and standard program classrooms in other schools allowed us to rule out intervention “leakage” into control classrooms within the GBG schools. Using the three kinds of controls, we were also able to collect more information about school- and individual-level variation and compare the consistency of the results across schools and urban areas.
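
The two-step assignment just described can be sketched as follows. This is a simplified illustration under assumed data, not the study’s actual procedure: the real rosters were checked and adjusted by hand with principals to balance gender, kindergarten behavior records, and socioeconomic status, and the function names are hypothetical.

```python
import random

# Hypothetical sketch of the assignment design: students are dealt across
# classrooms from an alphabetized list, then each classroom (with its
# teacher) is randomized to the GBG or the standard program.
def assign_students(students, n_classrooms):
    """Deal students from an alphabetized list across classrooms in turn."""
    rosters = [[] for _ in range(n_classrooms)]
    for i, student in enumerate(sorted(students)):
        rosters[i % n_classrooms].append(student)
    return rosters

def randomize_classrooms(rosters, rng):
    """Randomly label each classroom 'GBG' or 'standard', half and half."""
    labels = ["GBG", "standard"] * (len(rosters) // 2 + 1)
    labels = labels[:len(rosters)]
    rng.shuffle(labels)
    return dict(enumerate(labels))

rosters = assign_students(["Young", "Adams", "Baker", "Diaz"], n_classrooms=2)
conditions = randomize_classrooms(rosters, random.Random(0))
```

Note that randomization happens at the classroom level within schools, which is what allows within-school comparisons that hold school and community factors constant.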

The trial included two consecutive cohorts of children. The first cohort began first grade in 1985. The teachers who had been randomly assigned to deliver the GBG intervention received 40 hours of training in GBG implementation, followed by supportive mentoring and monitoring during the school year. When the students in the GBG classrooms advanced to second grade the next fall, their new teachers received the same training and support as their first-grade teachers and implemented the intervention again.

Also in the fall of 1986, the second cohort began first grade. The same first-grade teachers who had implemented the GBG in 1985 did so again with their new students. They received little retraining, because we assumed that they would continue implementing the GBG with fidelity.

The resources invested in an intervention can have an effect on outcomes independent of the intervention. To minimize differences due to such effects, we provided all teachers in standard-program classrooms with activities comparable in extent to GBG training and support. The focus of these activities—for example, meetings of teachers from different schools and trips with the children—was not on classroom behavior management.


Behavior Measurement

Our primary outcome measures were teachers’ ratings of children’s social adaptation. A teacher’s judgment about how a student is responding to classroom social demands is vitally important, because the teacher strongly influences whether the child continues to the next grade. The teacher is not only a predictor but also a participant in the child’s successes or failures. Teacher ratings have considerable predictive power regarding children’s outcomes well into adulthood.

The teachers rated the children on the Teacher Observation of Classroom Adaptation–Revised (TOCA–R) scale ( Kellam et al., 1976 ; Werthamer-Larsson, Kellam, and Wheeler, 1991 ). Ratings were obtained in the fall and spring of grades 1 and 2 and thereafter in the spring of grades 3 through 7.

Each time, a trained interviewer established a relationship of trust with the teacher in a quiet room in the school and then recorded ratings of each child, taking care to spend equal time on each child. Aggressive and disruptive behaviors were specified as: breaks rules, breaks things, fights, harms others, harms property, lies, teases classmates, takes others’ property, and yells at others. The teachers’ data were validated with other measures, such as classroom peer ratings and observation by independent observers.

Collection of Young Adult Data

When students reached ages 19–21, they were contacted to participate in a 90-minute telephone interview about their social adaptational status within their original family, school, work, intimate relationships, marital family (if any), and peer social fields. They also were asked about any use of services for problems with behavior, emotions, or drugs or alcohol, and about their developmental, behavioral, psychological, and psychiatric status. The Composite International Diagnostic Interview–University of Michigan (CIDI–UM) was used to determine psychiatric diagnoses based on the Diagnostic and Statistical Manual of Mental Disorders–IV (DSM–IV) criteria ( American Psychiatric Association, 1994 ; Kessler et al., 1994 ). Information was also obtained from school and juvenile court and adult incarceration records. A second interview at ages 20–23 was conducted in person to inquire about suicidal thoughts and attempts.

The interviewers did not know which participants had experienced the GBG. Of the students present in the fall of first grade in 1985, 75 percent were interviewed at the young adult followup by telephone or in person. No differences in rates of attrition were found between young adults who were in the GBG classrooms and those in the standard classrooms.

Outcomes Through Middle School

The GBG significantly reduced aggressive and disruptive behavior in primary school classrooms. In the first through sixth grades, students in GBG classrooms, especially the males, exhibited less aggressive and disruptive behavior than those in control classrooms ( Dolan et al., 1993 ). By the spring of sixth grade, males in GBG classrooms who had initially been rated above median levels for aggressive and disruptive behavior had significantly reduced these behaviors ( Kellam et al., 1994 ).

Among females, the levels of aggressive behavior were far lower than for males at the beginning of school and through seventh grade. The intervention did not appear to strongly influence such behavior among females ( Kellam et al., 1994 ; 1998a ; 1998b ).

Outcomes in Young Adulthood

Male students who had played the GBG in first grade reported significantly fewer problem outcomes at ages 19–21 than their peers who received the standard program. The results were particularly striking for those who had higher levels of aggressive and disruptive behaviors in first grade ( Table 1 ).

Table 1. Young Adult Outcomes in GBG and Standard Classrooms

Female participants had much lower rates of aggressive and disruptive behaviors in first grade and lower rates of problem outcomes at ages 19–21. The GBG had little or no statistically significant effect on female outcomes except for suicidal thoughts and, to some extent, alcohol abuse and dependence disorders.

The effectiveness of the GBG was clearest for the most illicit behaviors and disorders—for example, drug abuse and dependence disorders, antisocial personality disorder, and incarceration for violence.

Results for the second cohort, first-graders in 1986, were similar, but the impact was somewhat reduced. The GBG still appeared to reduce drug abuse and dependence disorders, but the benefit was more general rather than concentrated among the higher-risk children. No significant benefit was seen for alcohol abuse and dependence disorders, regular smoking, or suicidal thoughts or attempts.

RESULTS FROM OTHER GBG TRIALS

Large-scale population-based randomized field trials of the GBG have been completed in three locations and are under way in three others ( Mackenzie, Lurye, and Kellam, 2008 ).


Baltimore, 1990s

A second trial in Baltimore in the early 1990s coupled the GBG with an enhanced curriculum and instruction program. The goal was to improve both behavior and achievement, possibly producing synergism and enhancing and expanding impact. By the end of the first and second grades, the combined intervention had significant positive effects on aggressive and disruptive behavior and achievement ( Ialongo et al., 1999 ). By the end of sixth grade, significant reductions occurred in teacher-rated conduct problems, diagnoses of conduct disorder, school suspensions, use of mental health services, and smoking ( Ialongo et al., 2001 ; Storr et al., 2002 ; Furr-Holden et al., 2004 ; Petras, Masyn, and Ialongo, in press ).


Oregon

The GBG was replicated as a component of a population-based trial designed to target early antecedents of later problem outcomes through a multilevel preventive intervention in the first and fifth grades. The trial, called LIFT (Linking the Interests of Families and Teachers), significantly reduced student aggression during the intervention period and physical aggression following the intervention ( Reid et al., 1999 ; Stoolmiller, Eddy, and Reid, 2000 ). Followup analyses 3 years later showed reduced severity of attention deficit disorder behaviors in first-graders and, among fifth-graders, delayed time of first police arrest, association with misbehaving peers, and time to first patterned alcohol and marijuana use ( Eddy et al., 2003 ; 2005 ; Reid and Eddy, 2002 ). Further followup of fifth-graders until the end of high school showed significantly reduced overall use of tobacco, alcohol, and illicit drugs ( DeGarmo et al., 2009 ).

Netherlands

The GBG was implemented in the first and second grades in the Netherlands. The results showed that the intervention reduced attention deficit hyperactivity problems. Among the initially more disruptive students, a reduction in conduct problems was seen by the end of third grade ( van Lier et al., 2004 ). By age 10, large reductions were documented in antisocial behavior, and these reductions were associated with lower levels of peer rejection and increased affiliation with nondeviant peers ( van Lier, Vuijk, and Crijnen, 2005 ; Witvliet et al., 2009 ; van Lier et al., 2011). The GBG also reduced physical and relational victimization at age 10 and major depressive disorder, generalized anxiety disorder, and panic disorder/agoraphobia by age 13 ( Vuijk et al., 2007 ). Further analysis revealed that these reductions in depression and anxiety were mediated by the reductions in relational victimization for girls and physical victimization for boys ( Vuijk et al., 2007 ). Use of tobacco, but not alcohol, between ages 10 and 13 was also reduced among children in GBG classrooms ( van Lier, Huizink, and Crijnen, 2009 ). Later replications of the GBG implemented in the Netherlands showed similar benefits.


Belgium

In an epidemiologically based trial of the GBG in Belgium, Leflot and colleagues reported significant reductions in aggressive and disruptive behavior, increases in on-task behavior, decreases in talking-out behavior, and decreases in the development of oppositional behavior. These results were mediated by decreases in negative teacher remarks ( Leflot et al., 2010 ).

LESSONS LEARNED

The main lesson learned from the GBG trials is that a classroom behavior management intervention directed at aggressive and disruptive behavior in first and second grade can improve children’s long-term outcomes. The results of these trials show that such behaviors are malleable to effective universal methods applied with fidelity and consistency.

The improved young adult outcomes of male children who played the GBG point strongly to the conclusion that first-grade classrooms are extremely important to children’s development. As many previous studies have reported, maladapting to the classroom social task demands as early as first grade markedly increases the risk of later serious problems. For example, Ensminger and Slusarcick (1992) reported that males’ first-grade aggressive behavior coupled with poor academic achievement predicted future school dropout, drug abuse, and criminal behavior. The effect size achieved by the GBG is not surprising when we consider that a child’s success or failure in learning to read in the first grade makes a substantial difference to his or her future success in school and beyond.

The impact of the GBG among highly aggressive and disruptive male first-graders—the group most at risk for antisocial and criminal outcomes—adds dramatically to our understanding of such children. The results are consistent with the inference that these behaviors play an etiological role in the development of substance use, antisocial and violent criminal behavior, suicide, and other damaging outcomes.

The minimal impact of the GBG among females calls loudly for further study. Girls’ aggressive and disruptive behavior does not appear to have the same importance as boys’: It is less prevalent, is less enduring from early to later schooling, and appears less salient for females’ long-term development. There is an urgent need for developmental epidemiological studies to understand females’ developmental pathways and provide a basis for designing interventions for them.

The Need for Partnerships

Prevention research and programming can succeed only when they are accepted by the community’s cultural, social, and political structure ( Kellam, 2000 ). The GBG trials have been possible because their aims have accorded with the mission of the communities in which they were conducted. For example, the BCPSS was willing to commit its resources and expose its students to the research out of deep concern over the problem of socializing young children to be successful students. An equally critical condition for success was that the BCPSS and community exercised oversight over the adaptation of the GBG for their schools and the design and implementation of the trial. Community oversight can necessitate intense working through of issues, but without it the chances are slim that a prevention program will be adopted, even if it proves effective in trials. In the GBG trial, for example, the families challenged the researchers to show that the randomized design was consistent with the researchers’ commitment to carry out the study in accord with the community’s values. Ultimately, after intensive discussions and trust building, the families came to see randomization as creating an “even playing field,” where every child had the same odds of receiving the GBG or standard program. Moreover, everyone would benefit if the GBG performed as hoped and was accordingly adopted into the curriculum.

This model of partnership for research and later implementation represents the foundation of the next generation of public health, public education, and prevention and treatment research. Researchers will need to understand the mission and vision of local community and institutional leaders, such as ministers and block club leaders, school superintendents, and clinic and other service providers. To ensure that prevention research and programming are conducted and administered with fidelity and continuity over time, researchers will need to integrate “silos,” bringing together political and agency leaders at the federal, regional, state, city/county, and local levels. Unfortunately, the formation of such partnerships is still not well-taught in graduate schools.

Networks for Replication

The GBG has now been tested in many pre-post and short-term studies and three large-scale population-based randomized field trials, and further trials are under way in Colorado; Houston, Texas; and Oxfordshire, England. To accelerate these and future replications, and to maximize the information learned from them, we are in the early stage of planning, with NIDA support, a GBG International Network of researchers and their policymaking and institutional partners.

The development of such networks is just beginning in the drug abuse field. However, they are essential for efficiently assessing the effectiveness of prevention interventions through replications on a progressively larger scale and in diverse contexts—to find out what works, for whom, and under what cultural and institutional conditions. Researchers, policymakers, and practitioners will benefit from sharing experiences related to theory, measures, analyses, and obstacles to moving interventions beyond effectiveness trials and into implementation and stages of going to scale. Networks can expedite implementation and expansion into practice by including policymakers and practitioners on the same teams as the researchers.

Integrating Replication and Implementation

Moving the GBG from observational studies through systematic population-based randomized field trials, long-term followup, and replication in other sites has taken more than 25 years. For a prevention model developed today, this would be an unacceptably long time. Better theory and new designs and statistical methods now make possible more rapid advances from research into practice.

An important new strategy combines replication with expanding previously tested programs system-wide or moving them into new community sites. The first stage of moving a program into new sites or into practice is developing a partnership among community advocates, policymakers, service providers, and the research team that carried out the effectiveness trials. The second stage involves training a cadre of implementers to lead the training of additional implementers. As training proceeds, criteria and instruments used during the effectiveness trial can be streamlined and used to measure the effectiveness of the newly trained implementers. Such a strategy can include the designation of waves of trainees such that some would receive training while others awaited the next wave. Trainees could be randomized if their numbers reached levels that required wait-listing ( Brown et al., 2006 ).


By creating representative stratified samples of schools within a new school district and randomly assigning the trial intervention and control conditions to schools at each stratum, researchers could test training and effectiveness at each stratum in the district. Moving on, the next tier of the stratified sample of schools could be covered in a successive randomized roll-out or “dynamic wait list” design ( Brown et al., 2006 ; 2009 ). With such designs and methods, the next generation of research, policy, and programming for fostering human development holds great promise.
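
The stratified, randomized roll-out idea above can be sketched as follows. Everything here — the strata, school names, wave size, and function name — is a hypothetical illustration of the design, not an implementation from the trials: within each stratum the order in which schools receive training is randomized, so schools awaiting later waves serve as controls for earlier ones.

```python
import random

# Hypothetical sketch of a randomized roll-out ("dynamic wait list") design:
# shuffle schools within each stratum, then train them in successive waves.
def randomized_rollout(schools_by_stratum, wave_size, rng):
    """Return a list of waves; each wave is the group of schools trained next."""
    order = []
    for stratum, schools in schools_by_stratum.items():
        shuffled = list(schools)
        rng.shuffle(shuffled)  # randomize roll-out order within the stratum
        order.extend(shuffled)
    # Slice the randomized order into successive training waves.
    return [order[i:i + wave_size] for i in range(0, len(order), wave_size)]

waves = randomized_rollout({"urban": ["A", "B"], "rural": ["C", "D"]},
                           wave_size=2, rng=random.Random(1))
```

Because the wait itself is randomized, each wave provides both service delivery and a comparison group, which is what lets implementation double as replication.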

TOWARD A NEW HUMAN SERVICES SYSTEM

The reform of our health system is at the forefront of our national political and social discourse. Now is the time to think developmentally and epidemiologically, particularly at the community level, about how an improved health system fits into a broader, more functional child development system. On the basis of our experience with the GBG, we suggest that the potential for such a system depends on expanded school information systems and implementation of staged intervention systems.

The Role of School Information Systems

The GBG trial represents a step toward a long-overdue integration of education research and public health prevention research. Further steps in this direction will be greatly facilitated by expansion of school information systems. As we consider the role of school information systems and community and researcher partnerships, the report entitled Community-Monitoring Systems: Tracking and Improving the Well-Being of America’s Children and Adolescents ( Mrazek, Biglan, and Hawkins, 2004 ; NIDA, 2007 ) gives important background information.

Most school information systems primarily monitor academic progress and problems and disciplinary actions. An ideal system would also record each child’s progress in emotional and behavioral development, including his or her special needs. The added parameters would inform educators, researchers, and clinicians concerning the child’s early risk factors for outcomes such as those measured in the GBG trial as well as family needs and other data. They would support more salient planning for—and responses to—the needs of the individual child, the classroom, and the school.

The No Child Left Behind (NCLB) law presents a unique opportunity to specify both educational and public health needs at the level of demographic epidemiology. NCLB establishes a national, state, and local repository of information that can be analyzed at levels from the national down to the community and school district. Depending on the parameters included, NCLB assessments can furnish the data for epidemiological studies that show the broad distribution of educational and health-related problems and the conditions related to them. These then can be used to plan and implement multilevel community partnerships that include service providers, community advocates, and research teams for testing and implementing effective programs. Communities That Care is one example of a program moving in this direction ( Hawkins et al., 2008a ; 2008b ).

Proper safeguards for confidentiality are possible; indeed, they already exist in myriad settings where personal data are gathered, such as income tax records, medical records, and mail. Systems of restricted access are needed but should not block the integration of school services with social services of other kinds, such as foster care placement, juvenile justice, and child welfare.

The Importance of Staged Interventions

The GBG is a universal intervention; it addresses the entire classroom population, not just those who are at higher risk. In public health, universal programs are usually the strategic first line of defense: Chlorine in drinking water, fluoride in toothpaste, and vaccines against influenza are examples.

Like most universal interventions, the GBG reduced some individuals’ risk and averted some adverse outcomes, but not everyone’s. In general, children who do not respond well to a universal intervention are candidates for selective prevention (based on persistent risk factors alone), indicated prevention (based on actual symptoms of incipient problems), or treatment.

A coordinated system of staged interventions, consisting of a tested universal intervention backed up by empirically proven group and individual interventions, meets the needs of individuals at all risk levels and stages of problem development. It yields efficiency and economy by differentiating lower risk individuals and higher risk responders from those who need more invasive and costly help. The GBG demonstrated another advantage of universal interventions: It does not single out, and thereby risk stigmatizing, children who manifest aggressive and disruptive behavior. Those who do not respond to universal programs can be reliably identified by their specific needs and enrolled in progressively more selective interventions.

The universal strategy is the front line of a system of services that optimizes human development as well as physical health and is central to the next-stage design of human development services we propose. The most logical place to start building this system is in schools and the agencies that are mandated to serve children with special needs. Pre- and perinatal parental interventions can be important prior prevention services. Once school starts and children become part of the information system, family prevention interventions, developed and tested largely as selective interventions, can be closely integrated as back-up to school-based universal interventions. Partnerships will have to be developed radiating out to community leaders and a broad range of agencies and institutions. The formation of the system and the system itself must be responsive to community values and aspirations, safeguard confidentiality, and ensure proper oversight by appropriate stakeholders.

The GBG, a universal intervention to manage classroom behavior, reduces schoolchildren’s aggressive and disruptive behavior and prevents drug abuse and dependence disorders, violent crime, and other adverse outcomes in young adulthood. Findings from completed and ongoing large-scale GBG trials support the hypothesis that aggressive and disruptive behavior as early as first and second grade plays an etiological role in these adverse outcomes. They also endorse the vision of a national, state, and local human services system, founded in schools, that integrates education and health research and employs a strategy of first-line universal and second-line selective and indicated prevention interventions, backed up by specific treatment programs. The initial work to get this system started has already been done.

Acknowledgments

During the past 22 years, this research has been supported by National Institute of Mental Health grants R01 MH 42968, P50 MH 38725, R01 MH 40859, and T32 MH018834, with supplements from NIDA for each of the research grants. In addition, work on this paper was supported by grants from NIDA (P30 DA027828 and DA009897). We thank Baltimore City Public School System (BCPSS) staff, especially Ms. Alice Pinderhughes, superintendent during the early years of the study; Dr. Leonard Wheeler, area superintendent; Dr. Patricia Welch, chair of the Board of School Commissioners; and Dr. Carla Ford, head of the kindergarten-through-third-grade reading instruction. We also thank the students who participated and their families. We acknowledge very important contributions of Dr. Jaylan Turkkan in training and supporting the teachers; Dr. Lawrence Dolan as overall intervention chief; Ms. Natalie Keegan, who rallied community support and participation; Dr. James Anthony, who contributed to trial design and early implementation; and Dr. Lisa Ulmer, who led the early assessment team and helped frame the research. We thank the late Dr. Charles R. Schuster, who as director of NIDA augmented the NIMH prevention research center grant with the support needed to carry out the trial. Finally, we are grateful for the editorial contributions by Mr. Matthew Malouf.

  • American Psychiatric Association . Diagnostic and Statistical Manual of Mental Disorders. IV ed. Washington, DC: American Psychiatric Association; 1994. [ Google Scholar ]
  • Barrish H, Saunders M, Wolf M. Good Behavior Game: Effects of individual contingencies for group consequences on disruptive behavior in a classroom. Journal of Applied Behavior Analysis. 1969; 2 (2):119–124. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Brown CH, et al. Dynamic wait-listed designs for randomized trials: New designs for prevention of youth suicide. Clinical Trials. 2006; 3 :259–271. [ PubMed ] [ Google Scholar ]
  • Brown CH, et al. Adaptive designs for randomized trials in public health. Annual Review of Public Health. 2009; 30 :1–25. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • DeGarmo DS, et al. Evaluating mediators of the impact of the Linking the Interests of Families and Teachers (LIFT) multimodal preventive intervention on substance use initiation and growth across adolescence. Prevention Science. 2009; 10 (3):208–220. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Dolan LJ, et al. The short-term impact of two classroom-based preventive interventions on aggressive and shy behaviors and poor achievement. Journal of Applied Developmental Psychology. 1993; 14 (3):317–345. [ Google Scholar ]
  • Eddy JM, et al. Outcomes during middle school for an elementary school-based preventive intervention for conduct problems: Follow-up results from a randomized trial. Behavior Therapy. 2003; 34 (4):535–552. [ Google Scholar ]
  • Eddy JM, et al. The Linking the Interests of Families and Teachers (LIFT) prevention program for youth antisocial behavior: Description, outcomes, and feasibility in the community. In: Epstein MH, Kutash K, Duchnowski AJ, editors. Outcomes for Children and Youth with Emotional and Behavioral Disorders and Their Families: Program and Evaluation Best Practices. 2nd Ed. Austin, TX: Pro-Ed, Inc; 2005. pp. 479–499. [ Google Scholar ]
  • Ensminger ME, Slusarcick AL. Paths to high school graduation or dropout: A longitudinal study of first grade cohort. Sociology of Education. 1992; 65 (2):95–113. [ Google Scholar ]
  • Furr-Holden CDM, et al. Developmentally inspired drug prevention: Middle school outcomes in a school-based randomized prevention trial. Drug and Alcohol Dependence. 2004; 23 :149–158. [ PubMed ] [ Google Scholar ]
  • Hawkins JD, et al. Early effects of Communities That Care on targeted risks and initiation of delinquent behavior and substance use. Journal of Adolescent Health. 2008a; 43 (1):15–22. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Hawkins JD, et al. Effects of social development intervention in childhood 15 years later. Archives of Pediatric and Adolescent Medicine. 2008b; 162 :1133–1141. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Ialongo NS, et al. Proximal impact of two first-grade preventive interventions on the early risk behaviors for later substance abuse, depression, and antisocial behavior. American Journal of Community Psychology. 1999; 27 :599–641. [ PubMed ] [ Google Scholar ]
  • Ialongo N, et al. The distal impact of two first-grade preventive interventions on the conduct problems and disorder in early adolescence. Journal of Emotional and Behavioral Disorders. 2001; 9 :146–160. [ Google Scholar ]
  • Kellam SG, et al. Mental Health and Going to School: The Woodlawn Program of Assessment, Early Intervention, and Evaluation. Chicago: University of Chicago Press; 1976. [ Google Scholar ]
  • Kellam SG, et al. The course and malleability of aggressive behavior from early first grade into middle school: Results of a developmental epidemiologically-based preventive trial. Journal of Child Psychology and Psychiatry. 1994; 35 :259–282. [ PubMed ] [ Google Scholar ]
  • Kellam SG, et al. The effect of the level of aggression in the first grade classroom on the course and malleability of aggressive behavior into middle school. Development and Psychopathology. 1998a; 10 (2):165–185. [ PubMed ] [ Google Scholar ]
  • Kellam SG, et al. Effects of improving achievement on aggressive behavior and of improving aggressive behavior on achievement through two preventive interventions: An investigation of causal paths. In: Dohrenwend B, editor. Adversity, Stress, and Psychopathology. London: Oxford University Press; 1998b. pp. 486–505. [ Google Scholar ]
  • Kellam SG. Preventing School Violence: Plenary Papers of the 1999 Conference on Criminal Justice Research and Evaluation: Enhancing Policy and Practice through Research. Vol. 2. Washington, DC: National Institute of Justice; 2000. Community and institutional partnerships for school violence protection; pp. 1–21. [ Google Scholar ]
  • Kellam SG, et al. Effects of a universal classroom behavior management program in first and second grades on young adult behavioral, psychiatric, and social outcomes. Drug and Alcohol Dependence. 2008; 95 (Suppl 1):S5–S28. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Kessler R, et al. Lifetime and 12-month prevalence of DSM-III-R psychiatric disorders in the United States: Results from the National Comorbidity Survey. Archives of General Psychiatry. 1994; 51 (1):8–19. [ PubMed ] [ Google Scholar ]
  • Leflot G, et al. The role of teacher behavior management in the development of disruptive behaviors: An intervention study with the Good Behavior Game. Journal of Abnormal Child Psychology. 2010; 38 (6):869–882. [ PubMed ] [ Google Scholar ]
  • Mackenzie AC, Lurye I, Kellam SG. History and evolution of the Good Behavior Game. Supplementary material for: Kellam, S.G., et al., 2008. Effects of a universal classroom behavior management program in first and second grades on young adult behavioral, psychiatric, and social outcomes. Drug and Alcohol Dependence. 2008; 95 (Suppl 1):S5–S28. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Mrazek P, Biglan A, Hawkins JD. Community-Monitoring Systems: Tracking and Improving the Well-Being of America’s Children and Adolescents. Society for Prevention Research; 2004. available at www.preventionresearch.org/CMSbook.pdf . [ Google Scholar ]
  • National Commission on Teaching and America’s Future . Doing What Matters Most: Investing in Quality Teaching. New York: National Commission on Teaching and America’s Future; 1997. [ Google Scholar ]
  • National Council for Accreditation of Teacher Education Professional Standards for the Accreditation of Teacher Preparation Institutions. 2008. Available at www.ncate.org/documents/standards/NCATE%20Standards%202008.pdf .
  • National Institute on Drug Abuse . Community Monitoring Systems: Tracking and Improving the Well-Being of America’s Children and Adolescents. National Institutes of Health, U.S. Department of Health and Human Services; 2007. available at www.nida.nih.gov/pdf/cms.pdf . [ Google Scholar ]
  • Petras H, Masyn K, Ialongo N. The developmental impact of two first grade preventive interventions on aggressive/disruptive behavior in childhood and adolescence: An application of latent transition growth mixture modeling. Prevention Science, in press. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Reid JB, Eddy JM. Preventative efforts during the elementary school years: Linking the Interests of Families and Teachers (LIFT) project. In: Reid JB, Patterson GR, Snyder J, editors. Antisocial Behavior in Children and Adolescents: A Development Analysis and Model for Intervention. Washington, DC: American Psychological Association; 2002. pp. 219–233. [ Google Scholar ]
  • Reid JB, et al. Description and immediate impacts of a preventive intervention for conduct problems. American Journal of Community Psychology. 1999; 27 :483–517. [ PubMed ] [ Google Scholar ]
  • Stoolmiller M, Eddy JM, Reid JB. Detecting and describing preventive intervention effects in a universal school-based randomized trial targeting delinquent and violent behavior. Journal of Consulting and Clinical Psychology. 2000; 68 :296–306. [ PubMed ] [ Google Scholar ]
  • Storr CL, et al. A randomized controlled trial for two primary school intervention strategies to prevent early onset tobacco smoking. Drug and Alcohol Dependence. 2002; 66 :51–60. [ PubMed ] [ Google Scholar ]
  • van Lier PAC, et al. Preventing disruptive behavior in elementary schoolchildren: Impact of a universal classroom-based intervention. Journal of Consulting and Clinical Psychology. 2004; 72 :467–478. [ PubMed ] [ Google Scholar ]
  • van Lier PAC, Huizink A, Crijnen A. Impact of a preventive intervention targeting childhood disruptive behavior problems on tobacco and alcohol initiation from age 10 to 13 years. Drug and Alcohol Dependence. 2009; 100 (3):228–233. [ PubMed ] [ Google Scholar ]
  • van Lier PAC, Vuijk P, Crijnen AAM. Understanding mechanisms of change in the development of antisocial behavior: The impact of a universal intervention. Journal of Abnormal Child Psychology. 2005; 33 :521–535. [ PubMed ] [ Google Scholar ]
  • Vuijk P, et al. Testing sex-specific pathways from peer victimization to anxiety and depression in early adolescents through a randomized intervention trial. Journal of Affective Disorders. 2007; 100 :221–226. [ PubMed ] [ Google Scholar ]
  • Walter HJ, Gouze K, Lim KG. Teachers’ beliefs about mental health needs in inner city elementary schools. Journal of the American Academy of Child and Adolescent Psychiatry. 2006; 45 :61–68. [ PubMed ] [ Google Scholar ]
  • Werthamer-Larsson L, Kellam SG, Wheeler L. Effect of first grade classroom environment on child shy behavior, aggressive behavior, and concentration problems. American Journal of Community Psychology. 1991; 19 (4):585–602. [ PubMed ] [ Google Scholar ]
  • Wilcox HC, et al. The impact of two universal randomized first- and second-grade classroom interventions on young adult suicide ideation and attempts. Drug and Alcohol Dependence. 2008; 95 (Suppl 1):S60–S73. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Witvliet M, et al. Testing links between childhood positive peer relations and externalizing outcomes through a randomized controlled intervention study. Journal of Consulting and Clinical Psychology. 2009; 77 (5):905–915. [ PubMed ] [ Google Scholar ]

15 Hypothesis Examples

A hypothesis is defined as a testable prediction, and is used primarily in scientific experiments as a potential or predicted outcome that scientists attempt to prove or disprove (Atkinson et al., 2021; Tan, 2022).

In my types of hypothesis article, I outlined 13 different hypotheses, including the directional hypothesis (which predicts whether the effect of a treatment will be positive or negative) and the associative hypothesis (which makes a prediction about the association between two variables).

This article will dive into some interesting examples of hypotheses and examine potential ways you might test each one.

Hypothesis Examples

1. “Inadequate Sleep Decreases Memory Retention”

Field: Psychology

Type: Causal Hypothesis

A causal hypothesis explores the effect of one variable on another. This example posits that a lack of adequate sleep causes decreased memory retention. In other words, if you are not getting enough sleep, your ability to remember and recall information may suffer.

How to Test:

To test this hypothesis, you might devise an experiment whereby your participants are divided into two groups: one receives an average of 8 hours of sleep per night for a week, while the other gets less than the recommended sleep amount.

During this time, all participants would daily study and recall new, specific information. You’d then measure memory retention of this information for both groups using standard memory tests and compare the results.

Should the group with less sleep have statistically significant poorer memory scores, the hypothesis would be supported.

Ensuring the integrity of the experiment requires taking into account factors such as individual health differences, stress levels, and daily nutrition.
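
To make the comparison concrete, here is a minimal Python sketch of the between-group analysis using Welch's t statistic. The scores are entirely hypothetical, and a real analysis would also compute degrees of freedom and a p-value:

```python
from statistics import mean, variance

def welch_t(scores_a, scores_b):
    """Welch's t statistic for two independent groups (unequal variances)."""
    na, nb = len(scores_a), len(scores_b)
    va, vb = variance(scores_a), variance(scores_b)  # sample variances
    return (mean(scores_a) - mean(scores_b)) / (va / na + vb / nb) ** 0.5

rested   = [85, 88, 90, 84, 87, 91]  # hypothetical memory scores, ~8 h sleep
deprived = [78, 75, 80, 74, 79, 77]  # hypothetical scores, restricted sleep
t = welch_t(rested, deprived)        # a large positive t favors the hypothesis
```

If the resulting t statistic is large enough to be statistically significant, the sleep-deprived group's poorer scores would support the hypothesis.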

Relevant Study: Sleep loss, learning capacity and academic performance (Curcio, Ferrara & De Gennaro, 2006)

2. “Increase in Temperature Leads to Increase in Kinetic Energy”

Field: Physics

Type: Deductive Hypothesis

The deductive hypothesis applies the logic of deductive reasoning – it moves from a general premise to a more specific conclusion. This specific hypothesis assumes that as temperature increases, the kinetic energy of particles also increases – that is, when you heat something up, its particles move around more rapidly.

This hypothesis could be examined by heating a gas in a controlled environment and capturing the movement of its particles as a function of temperature.

You’d gradually increase the temperature and measure the kinetic energy of the gas particles with each increment. If the kinetic energy consistently rises with the temperature, your hypothesis gets supporting evidence.

Variables such as pressure and volume of the gas would need to be held constant to ensure validity of results.
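
The expected relationship itself comes from kinetic theory: for an ideal gas, the mean translational kinetic energy of a particle is E = (3/2)kT, so energy scales linearly with absolute temperature. A tiny sketch of that relationship:

```python
# Mean translational kinetic energy of an ideal-gas particle: E = (3/2) * k * T
K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def mean_kinetic_energy(temp_kelvin):
    """Average translational kinetic energy (joules) at temperature T (kelvin)."""
    return 1.5 * K_BOLTZMANN * temp_kelvin

# Doubling the absolute temperature doubles the mean kinetic energy
assert mean_kinetic_energy(600) == 2 * mean_kinetic_energy(300)
```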

3. “Children Raised in Bilingual Homes Develop Better Cognitive Skills”

Field: Psychology/Linguistics

Type: Comparative Hypothesis

The comparative hypothesis posits a difference between two or more groups based on certain variables. In this context, you might propose that children raised in bilingual homes have superior cognitive skills compared to those raised in monolingual homes.

Testing this hypothesis could involve identifying two groups of children: those raised in bilingual homes, and those raised in monolingual homes.

Cognitive skills in both groups would be evaluated using a standard cognitive ability test at different stages of development. The examination would be repeated over a significant time period for consistency.

If the group raised in bilingual homes persistently scores higher than the other, the hypothesis would thereby be supported.

The challenge for the researcher would be controlling for other variables that could impact cognitive development, such as socio-economic status, education level of parents, and parenting styles.

Relevant Study: The cognitive benefits of being bilingual (Marian & Shook, 2012)

4. “High-Fiber Diet Leads to Lower Incidences of Cardiovascular Diseases”

Field: Medicine/Nutrition

Type: Alternative Hypothesis

The alternative hypothesis suggests an alternative to a null hypothesis. In this context, the implied null hypothesis could be that diet has no effect on cardiovascular health, which the alternative hypothesis contradicts by suggesting that a high-fiber diet leads to fewer instances of cardiovascular diseases.

To test this hypothesis, a longitudinal study could be conducted on two groups of participants; one adheres to a high-fiber diet, while the other follows a diet low in fiber.

After a fixed period, the cardiovascular health of participants in both groups could be analyzed and compared. If the group following a high-fiber diet has a lower number of recorded cases of cardiovascular diseases, it would provide evidence supporting the hypothesis.

Control measures should be implemented to exclude the influence of other lifestyle and genetic factors that contribute to cardiovascular health.

Relevant Study: Dietary fiber, inflammation, and cardiovascular disease (King, 2005)

5. “Gravity Influences the Directional Growth of Plants”

Field: Agronomy / Botany

Type: Explanatory Hypothesis

An explanatory hypothesis attempts to explain a phenomenon. In this case, the hypothesis proposes that gravity affects how plants direct their growth – both above-ground (toward sunlight) and below-ground (toward water and other resources).

The testing could be conducted by growing plants in a rotating cylinder to create artificial gravity.

Observations on the direction of growth, over a specified period, can provide insights into the influencing factors. If plants consistently direct their growth in a manner that indicates the influence of gravitational pull, the hypothesis is substantiated.

It is crucial to ensure that other growth-influencing factors, such as light and water, are uniformly distributed so that only gravity influences the directional growth.

6. “The Implementation of Gamified Learning Improves Students’ Motivation”

Field: Education

Type: Relational Hypothesis

The relational hypothesis describes the relationship between two variables. Here, the hypothesis is that the implementation of gamified learning has a positive effect on the motivation of students.

To validate this proposition, two sets of classes could be compared: one that implements a learning approach with game-based elements, and another that follows a traditional learning approach.

The students’ motivation levels could be gauged by monitoring their engagement, performance, and feedback over a considerable timeframe.

If the students engaged in the gamified learning context present higher levels of motivation and achievement, the hypothesis would be supported.

Control measures ought to be put into place to account for individual differences, including prior knowledge and attitudes towards learning.

Relevant Study: Does educational gamification improve students’ motivation? (Chapman & Rich, 2018)

7. “Mathematics Anxiety Negatively Affects Performance”

Field: Educational Psychology

Type: Research Hypothesis

The research hypothesis involves making a prediction that will be tested. In this case, the hypothesis proposes that a student’s anxiety about math can negatively influence their performance in math-related tasks.

To assess this hypothesis, researchers must first measure the mathematics anxiety levels of a sample of students using a validated instrument, such as the Mathematics Anxiety Rating Scale.

Then, the students’ performance in mathematics would be evaluated through standard testing. If there’s a negative correlation between the levels of math anxiety and math performance (meaning as anxiety increases, performance decreases), the hypothesis would be supported.

It would be crucial to control for relevant factors such as overall academic performance and previous mathematical achievement.
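
A minimal Python sketch of the correlational analysis, using hypothetical anxiety and test scores (a real study would also test the significance of r):

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

anxiety = [10, 25, 40, 55, 70]  # hypothetical Mathematics Anxiety Rating scores
score   = [92, 85, 76, 64, 58]  # hypothetical math-test scores
r = pearson_r(anxiety, score)   # a clearly negative r would support the hypothesis
```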

8. “Disruption of Natural Sleep Cycle Impairs Worker Productivity”

Field: Organizational Psychology

Type: Operational Hypothesis

The operational hypothesis involves defining the variables in measurable terms. In this example, the hypothesis posits that disrupting the natural sleep cycle, for instance through shift work or irregular working hours, can lessen productivity among workers.

To test this hypothesis, you could collect data from workers who maintain regular working hours and those with irregular schedules.

Measuring productivity could involve examining the worker’s ability to complete tasks, the quality of their work, and their efficiency.

If workers with interrupted sleep cycles demonstrate lower productivity compared to those with regular sleep patterns, it would lend support to the hypothesis.

Consideration should be given to potential confounding variables such as job type, worker age, and overall health.

9. “Regular Physical Activity Reduces the Risk of Depression”

Field: Health Psychology

Type: Predictive Hypothesis

A predictive hypothesis involves making a prediction about the outcome of a study based on the observed relationship between variables. In this case, it is hypothesized that individuals who engage in regular physical activity are less likely to suffer from depression.

Longitudinal studies are well suited to testing this hypothesis, tracking participants’ levels of physical activity and their mental health status over time.

The level of physical activity could be self-reported or monitored, while mental health status could be assessed using standard diagnostic tools or surveys.

If data analysis shows that participants maintaining regular physical activity have a lower incidence of depression, this would endorse the hypothesis.

However, care should be taken to control for other lifestyle and behavioral factors that could confound the results.

Relevant Study: Regular physical exercise and its association with depression (Kim, 2022)

10. “Regular Meditation Enhances Emotional Stability”

Type: Empirical Hypothesis

In the empirical hypothesis, predictions are based on amassed empirical evidence. This particular hypothesis theorizes that frequent meditation leads to improved emotional stability, resonating with numerous studies linking meditation to a variety of psychological benefits.

Earlier studies reported some correlations, but to test this hypothesis directly, you’d organize an experiment where one group meditates regularly over a set period while a control group doesn’t.

Both groups’ emotional stability levels would be measured at the start and end of the experiment using a validated emotional stability assessment.

If regular meditators display noticeable improvements in emotional stability compared to the control group, the hypothesis gains support.

You’d have to ensure a similar emotional baseline for all participants at the start to avoid skewed results.

11. “Children Exposed to Reading at an Early Age Show Superior Academic Progress”

Type: Directional Hypothesis

The directional hypothesis predicts the direction of an expected relationship between variables. Here, the hypothesis anticipates that early exposure to reading positively affects a child’s academic advancement.

A longitudinal study tracking children’s reading habits from an early age and their consequent academic performance could validate this hypothesis.

Parents could report their children’s exposure to reading at home, while standardized school exam results would provide a measure of academic achievement.

If the children exposed to early reading consistently perform better academically, it gives weight to the hypothesis.

However, it would be important to control for variables that might impact academic performance, such as socioeconomic background, parental education level, and school quality.

12. “Adopting Energy-efficient Technologies Reduces Carbon Footprint of Industries”

Field: Environmental Science

Type: Descriptive Hypothesis

A descriptive hypothesis predicts the existence of an association or pattern related to variables. In this scenario, the hypothesis suggests that industries adopting energy-efficient technologies will show a reduced carbon footprint as a result.

Global industries making use of energy-efficient technologies could track their carbon emissions over time. At the same time, others not implementing such technologies continue their regular tracking.

After a defined time, the carbon emission data of both groups could be compared. If industries that adopted energy-efficient technologies demonstrate a notable reduction in their carbon footprints, the hypothesis would be supported.

In such a comparison, you would need to control for variation introduced by factors such as industry type, size, and location.

13. “Reduced Screen Time Improves Sleep Quality”

Type: Simple Hypothesis

The simple hypothesis is a prediction about the relationship between two variables, excluding any other variables from consideration. This example posits that by reducing time spent on devices like smartphones and computers, an individual should experience improved sleep quality.

A sample group would need to reduce their daily screen time for a pre-determined period. Sleep quality before and after the reduction could be measured using self-report sleep diaries and objective measures like actigraphy, monitoring movement and wakefulness during sleep.

If the data show that sleep quality improved after the screen-time reduction, the hypothesis would be supported.

Other aspects affecting sleep quality, like caffeine intake, should be controlled during the experiment.

Relevant Study: Screen time use impacts low‐income preschool children’s sleep quality, tiredness, and ability to fall asleep (Waller et al., 2021)

14. “Engaging in Brain-Training Games Improves Cognitive Functioning in the Elderly”

Field: Gerontology

Type: Inductive Hypothesis

Inductive hypotheses are based on observations leading to broader generalizations and theories. In this context, the hypothesis generalizes from observed instances to propose that engaging in brain-training games can help improve cognitive functioning in the elderly.

A longitudinal study could be conducted where an experimental group of elderly people partakes in regular brain-training games.

Their cognitive functioning could be assessed at the start of the study and at regular intervals using standard neuropsychological tests.

If the group engaging in brain-training games shows better cognitive functioning scores over time compared to a control group not playing these games, the hypothesis would be supported.

15. “Farming Practices Influence Soil Erosion Rates”

Type: Null Hypothesis

A null hypothesis is a negative statement assuming no relationship or difference between variables. The hypothesis in this context asserts there’s no effect of different farming practices on the rates of soil erosion.

Comparing soil erosion rates in areas with different farming practices over a considerable timeframe could help test this hypothesis.

If, statistically, the farming practices do not lead to differences in soil erosion rates, the null hypothesis is retained (strictly speaking, it fails to be rejected).

However, if marked variation appears, the null hypothesis is rejected, meaning farming practices do influence soil erosion rates. It would be crucial to control for external factors like weather, soil type, and natural vegetation.
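To make the reject/fail-to-reject logic concrete, here is a sketch of a one-way ANOVA on invented erosion-rate data for three hypothetical practices (real field data would, as noted, need controls for weather, soil type, and vegetation):

```python
from statistics import mean

# Hypothetical annual soil-erosion rates (tonnes/ha) under three practices.
groups = {
    "conventional tillage": [12.1, 11.4, 13.0, 12.6],
    "no-till": [6.2, 5.8, 7.1, 6.5],
    "cover cropping": [7.9, 8.4, 7.2, 8.0],
}

all_vals = [v for g in groups.values() for v in g]
grand = mean(all_vals)
k, n = len(groups), len(all_vals)

# One-way ANOVA by hand: F = between-group / within-group mean square.
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups.values())
ss_within = sum((v - mean(g)) ** 2 for g in groups.values() for v in g)
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F({k - 1}, {n - k}) = {f_stat:.1f}")

# A large F relative to the critical value leads us to reject H0; a small
# one means we fail to reject it -- we never "prove" H0 true.
```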

The variety of hypotheses mentioned above underscores the diversity of research constructs inherent in different fields, each with its unique purpose and way of testing.

While researchers may develop hypotheses primarily as tools to define and narrow the focus of the study, these hypotheses also serve as valuable guiding forces for the data collection and analysis procedures, making the research process more efficient and direction-focused.

Hypotheses serve as a compass for any form of academic research. The diverse examples provided, from Psychology to Educational Studies, Environmental Science to Gerontology, clearly demonstrate how certain hypotheses suit specific fields more aptly than others.

It is important to underline that although these varied hypotheses differ in their structure and methods of testing, each endorses the fundamental value of empiricism in research. Evidence-based decision making remains at the heart of scholarly inquiry, regardless of the research field, thus aligning all hypotheses to the core purpose of scientific investigation.

Testing hypotheses is an essential part of the scientific method. By doing so, researchers can either confirm their predictions, lending further support to an existing theory, or they might uncover new insights that could shift the field's understanding of a particular phenomenon. In either case, hypotheses serve as the stepping stones for scientific exploration and discovery.

Atkinson, P., Delamont, S., Cernat, A., Sakshaug, J. W., & Williams, R. A. (2021). SAGE Research Methods Foundations. SAGE Publications Ltd.

Curcio, G., Ferrara, M., & De Gennaro, L. (2006). Sleep loss, learning capacity and academic performance. Sleep Medicine Reviews, 10(5), 323-337.

Kim, J. H. (2022). Regular physical exercise and its association with depression: A population-based study. Psychiatry Research, 309, 114406.

King, D. E. (2005). Dietary fiber, inflammation, and cardiovascular disease. Molecular Nutrition & Food Research, 49(6), 594-600.

Marian, V., & Shook, A. (2012). The cognitive benefits of being bilingual. Cerebrum: The Dana Forum on Brain Science, 2012. Dana Foundation.

Tan, W. C. K. (2022). Research Methods: A Practical Guide for Students and Researchers (2nd ed.). World Scientific Publishing Company.

Waller, N. A., Zhang, N., Cocci, A. H., D'Agostino, C., Wesolek-Greenson, S., Wheelock, K., … & Resnicow, K. (2021). Screen time use impacts low-income preschool children's sleep quality, tiredness, and ability to fall asleep. Child: Care, Health and Development, 47(5), 618-626.

Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.


Published: 17 May 2021

The natural selection of good science

  • Alexander J. Stewart   ORCID: orcid.org/0000-0001-5234-3871 1 &
  • Joshua B. Plotkin   ORCID: orcid.org/0000-0003-2349-6304 2  

Nature Human Behaviour volume 5, pages 1510–1518 (2021)


  • Cultural evolution
  • Scientific community

Scientists in some fields are concerned that many published results are false. Recent models predict selection for false positives as the inevitable result of pressure to publish, even when scientists are penalized for publications that fail to replicate. We model the cultural evolution of research practices when laboratories are allowed to expend effort on theory, enabling them, at a cost, to identify hypotheses that are more likely to be true, before empirical testing. Theory can restore high effort in research practice and suppress false positives to a technical minimum, even without replication. The mere ability to choose between two sets of hypotheses, one with greater prior chance of being correct, promotes better science than can be achieved with effortless access to the set of stronger hypotheses. Combining theory and replication can have synergistic effects. On the basis of our analysis, we propose four simple recommendations to promote good science.
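The abstract's central intuition — that raising the prior plausibility of the hypotheses a lab tests suppresses false positives — can be illustrated with the standard positive-predictive-value arithmetic. Note this is the classic Ioannidis-style calculation, not the authors' evolutionary model; the numbers below are illustrative defaults.

```python
def ppv(prior, power=0.8, alpha=0.05):
    """Share of statistically significant findings that are true, given the
    prior probability that a tested hypothesis is correct."""
    true_pos = power * prior
    false_pos = alpha * (1 - prior)
    return true_pos / (true_pos + false_pos)

# Theory that filters for more plausible hypotheses raises the prior, which
# sharply raises the share of published positives that are actually true.
for prior in (0.05, 0.25, 0.50):
    print(f"prior = {prior:.2f} -> PPV = {ppv(prior):.2f}")
```

With these defaults, moving the prior from 0.05 to 0.5 takes the positive predictive value from under a half to over nine in ten, which is the sense in which effort spent on theory can "suppress false positives" before any replication happens.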



Data availability

All scripts and data to reproduce the results are available at https://doi.org/10.5281/zenodo.4616768 .

Code availability

All scripts necessary to reproduce the results are available at https://doi.org/10.5281/zenodo.4616768 .


Acknowledgements

The authors thank P. Smaldino for constructive feedback. The authors received no specific funding for this work.

Author information

Authors and Affiliations

School of Mathematics and Statistics, University of St Andrews, St Andrews, UK

Alexander J. Stewart

Department of Biology, University of Pennsylvania, Philadelphia, PA, USA

Joshua B. Plotkin


Contributions

A.J.S. and J.B.P. conceived the project and developed the model. A.J.S. ran the simulations and analysed the model with input from J.B.P. A.J.S. and J.B.P. wrote the paper.

Corresponding authors

Correspondence to Alexander J. Stewart or Joshua B. Plotkin .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Additional information

Peer review information   Nature Human Behaviour  thanks Timothy Parker, Jeffrey Schank and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Figs. 1–12 and Supplementary Discussion.

Reporting summary

About this article

Cite this article

Stewart, A.J., Plotkin, J.B. The natural selection of good science. Nat Hum Behav 5, 1510–1518 (2021). https://doi.org/10.1038/s41562-021-01111-x

Download citation

Received : 16 March 2020

Accepted : 01 April 2021

Published : 17 May 2021

Issue Date : November 2021

DOI : https://doi.org/10.1038/s41562-021-01111-x


This article is cited by

The advent and fall of a vocabulary learning bias from communicative efficiency.

  • David Carrera-Casado
  • Ramon Ferrer-i-Cancho

Biosemiotics (2021)

