A Case Study of Using Behavioural Science in Practice

How Southwest Airlines Used It to Improve the Boarding Experience

First Online: 20 November 2018


Helena Rubinstein


This is a detailed case study of how behavioural science was applied to improve the boarding experience at the gate in a way that satisfied the needs of passengers, employees, and the airline operator, Southwest Airlines. The case study shows how the principles for applying behavioural science described in earlier chapters were used to solve the challenge. In this chapter, Rubinstein outlines the activities that enabled the development of a behavioural model of passenger behaviour, and how a five-step approach to intervention design was applied. It ends by showing some of the solutions that were devised and describing the results of a testing programme in a real-life situation at an airport in the USA.


Author information

Authors and Affiliations

Innovia Technology Ltd, Cambridge, UK

Helena Rubinstein



Copyright information

© 2018 The Author(s)

About this chapter

Rubinstein, H. (2018). A Case Study of Using Behavioural Science in Practice. In: Applying Behavioural Science to the Private Sector. Palgrave Pivot, Cham. https://doi.org/10.1007/978-3-030-01698-2_7


DOI: https://doi.org/10.1007/978-3-030-01698-2_7

Published: 20 November 2018

Publisher Name: Palgrave Pivot, Cham

Print ISBN: 978-3-030-01697-5

Online ISBN: 978-3-030-01698-2

eBook Packages: Behavioral Science and Psychology; Behavioral Science and Psychology (R0)

Perspective

Published: 20 March 2023

A manifesto for applying behavioural science

Michael Hallsworth (ORCID: orcid.org/0000-0002-7868-4727)

Nature Human Behaviour volume 7, pages 310–322 (2023)


Subjects: Human behaviour

Recent years have seen a rapid increase in the use of behavioural science to address the priorities of public and private sector actors. There is now a vibrant ecosystem of practitioners, teams and academics building on each other’s findings across the globe. Their focus on robust evaluation means we know that this work has had an impact on important issues such as antimicrobial resistance, educational attainment and climate change. However, several critiques have also emerged; taken together, they suggest that applied behavioural science needs to evolve further over its next decade. This manifesto for the future of applied behavioural science looks at the challenges facing the field and sets out ten proposals to address them. Meeting these challenges will mean that behavioural science is better equipped to help to build policies, products and services on stronger empirical foundations—and thereby address the world’s crucial challenges.


There has been “a remarkable increase in behavioural studies and interventions in public policy on a global scale” over the past 15 years 1 . This growth has been built on developments taking place over many preceding decades. One was the increasing empirical evidence of the importance of non-conscious drivers of behaviour. While psychologists have studied these drivers since at least as far back as the work of William James and Wilhelm Wundt in the nineteenth century, they received renewed attention from the research agenda that showed how “heuristics and biases” influence judgement and decision-making 2 . These and other studies led many psychologists to converge on dual-process theories of behaviour that proposed that rapid, intuitive and non-conscious cognitive processes sit alongside deliberative, reflective and self-aware ones 3 .

These theories challenged explanations that foregrounded the role of conscious attitudes, motivations and intentions in determining actions 4 . One result was the creation of the field of behavioural economics, which developed new explanations for why observed behaviour diverged from existing economic models 5 . For example, the concept of “mental accounting” showed how people assign money to certain purposes and—contrary to standard economic theory—are reluctant to repurpose those sums, even when they might benefit from doing so 6 .

Behavioural economics may represent only one strand of applied behavioural science, but it has attracted substantial attention. By the mid-2000s, these advances had an increasingly receptive audience among some governments and policymakers 7 . The publication of the book Nudge in 2008 responded to this demand by using the evidence mentioned earlier to create practical policy solutions (Box 1 ) 8 . Then, in 2010, the UK government set up its Behavioural Insights Team 9 , which is notable because it became “a paradigmatic example for the translation of behavioural insights into public policy” that acted as “a blueprint for the establishment of similar units elsewhere” 10 , 11 , 12 . Similar initiatives were adopted by many public sector bodies at the local, national and supra-national levels and by private companies large and small 1 , 11 , 13 , 14 . The Organisation for Economic Co-operation and Development (OECD) has labelled this creation of more than 200 dedicated public entities a “paradigm shift” 15 that shows that applied behavioural science has “taken root in many ways across many countries around the world and across a wide range of sectors and policy areas” 16 .

This history is necessarily selective; it does not attempt to cover the full range of work in the behavioural sciences. Rather, my focus is on the main ways that approaches often grouped under the term ‘behavioural insights’ have been applied to practical issues in the public and private sectors over the past 15 years 17 (see Box 1 for definitions of these and other terms). These approaches have been adopted in both developed and developing economies, and their precise forms of implementation have varied from context to context 18 . However, a crucial point to emphasize is that they have gone far beyond the self-imposed limits of nudges, even if that label is still used (often unhelpfully) as a blanket term. Instead, a broader agenda has emerged that explores how behavioural science can be integrated into core public and private sector activities such as regulation, taxation, strategy and operations. This broader agenda is reflected in the creation of research programmes on “behavioural public policy” 19 or “behavioural public administration” 20 .

Proponents of these approaches can point to improved outcomes in many areas, including health 21 , education 22 , sustainability 23 and criminal justice 24 . Yet criticisms have emerged alongside these successes. For example, there is an ongoing debate about how publication bias may have inflated the published effect sizes of nudge interventions 25 , 26 . Other criticisms target the goals, assumptions and techniques associated with recent applications of behavioural science (Box 2 ).

This Perspective attempts to respond to these criticisms by setting out an agenda to ensure that applied behavioural science can fulfil its potential in the coming decades. It does so by offering ten proposals, as summarized in Table 1 . These proposals fall into three categories: scope (the range and scale of issues to which behavioural science is applied), methods (the techniques and resources that behavioural science deploys) and values (the principles, ideals and standards of conduct that behavioural scientists adopt). These proposals are the product of a non-systematic review of relevant literature and my experience of applying behavioural science. They are not an attempt to represent expert consensus; they aim to provoke debate as well as agreement.

Figure 1 shows how each proposal aims to address one or more of the criticisms set out in Box 2 . Figure 1 also indicates how responsibilities for implementing the proposals are allocated among four major groups in the behavioural science ecosystem: practitioners (individuals or teams who apply behavioural science findings in practical settings), the clients who commission these practitioners (for example, public or private sector organizations), academics working in the behavioural sciences (including disciplines such as anthropology, economics and sociology) and funders who support the work of these academics. These groups constitute the ‘we’ referred to in the rest of the paper, which summarizes a full-length, in-depth report available at www.bi.team .

Fig. 1

The left side shows common criticisms made of the behavioural insights approach. The middle column presents ten proposals to improve the way behavioural science is applied. These proposals are organized into three categories (scope, methods and values), which are represented by red, blue and yellow, respectively. The arrows from the criticisms to the proposals show which of the latter attempt to address the former. The matrix on the right shows the four main groups involved with implementing the proposals: practitioners, clients, academics and funders. The dots in each column indicate that the relevant group will need to make a substantive contribution to achieving the goals of the proposal in the corresponding row.

Box 1 Glossary of main terms

Behavioural science . In its broadest sense, a discipline that uses scientific methods to generate and test theories that explain and predict the behaviour of individuals, groups and populations. This piece focuses particularly on the implications of dual-process theories of behaviour. Behavioural science is different from ‘the behavioural sciences’, which refers to a broader group of any scientific disciplines that study behaviour.

Behavioural insights . The application of findings from behavioural science to analyse and address practical issues in real-world settings, usually coupled with a rigorous evaluation of the effects of any interventions. In the current piece, this term is used interchangeably with ‘applied behavioural science’.

Behavioural economics . The application of findings from behavioural science to the field of economics to create explanations for economic behaviour that often diverge from the principles of neoclassical economic theory.

Nudge . The design of choices so that non-conscious cognitive processes lead individuals to select the option that leaves them better off, as judged by themselves. Nudges do not involve coercion or any substantial change to economic incentives, leaving people with a meaningful ability to choose a different option from the one that the choice architect intends.

Box 2 Criticisms of the behavioural insights approach

Limited impact . The approach has focused on more tractable and easy-to-measure changes at the expense of bigger impacts; it has just been tinkering around the edges of fundamental problems 29 , 50 , 172 .

Failure to reach scale . The approach promotes a model of experimentation followed by scaling, but it has not paid enough attention to how successful scaling happens—and the fact that it often does not happen 18 .

Mechanistic thinking . The approach has promoted a simple, linear and mechanistic approach to understanding behaviour that ignores second-order effects and spillovers (and employs evaluation methods that assume a move from A to B against a static background) 29 , 62 , 173 .

Flawed evidence base . The replication crisis has challenged the evidence base underpinning the behavioural insights approach, adding to existing concerns such as the duration of its interventions’ effects 79 , 174 .

Lack of precision . The approach lacks the ability to construct precise interventions and establish what works for whom, and when. Instead, it relies either on overgeneral frameworks or on disconnected lists of biases 80 , 92 , 94 .

Overconfidence . The approach can encourage overconfidence and overextrapolation from its evidence base, particularly when testing is not an option 175 .

Control paradigm . The approach is elitist and pays insufficient attention to people’s own goals and strategies; it uses concepts such as irrationality to justify attempts to control the behaviour of individuals, since they lack the means to do so themselves 176 , 177 .

Neglect of the social context . The approach has a limited, overly cognitive and individualistic view of behaviour that neglects the reality that humans are embedded in established societies and practices 125 , 178 , 179 .

Ethical concerns . The behavioural insights approach will face more ethics, transparency and privacy conundrums as it attempts more ambitious and innovative work 143 , 145 , 154 .

Homogeneity of participants and perspectives . The range of participants in behavioural science research has been narrow and unrepresentative 164 ; homogeneity in the locations and personal characteristics of behavioural scientists influences their viewpoints, practices and theories 124 , 166 .

Use behavioural science as a lens

The early phase of the behavioural insights movement was marked by scepticism about whether effects obtained in laboratories would translate to real-world settings 27 . In response, practitioners developed standard approaches that could demonstrate a clear causal link between an intervention and an outcome 28 . In practice, these approaches directed attention towards how the design of specific aspects of a policy, product or service influences discrete behaviours by actors who are considered mostly in isolation 29 .

These standard approaches have clear strengths and have produced valuable results in many contexts around the world 20 , 30 . However, in the aggregate, they have also fostered a perspective centred on the metaphor of behavioural science as a specialist tool. This view mostly limits behavioural science to the role of fixing concrete aspects of predetermined interventions rather than aiding the consideration of broader policy goals 31 .

Over time, this view has created a self-reinforcing perception that only certain kinds of tasks are suitable for behavioural scientists 29 . Opportunities, skills and ambitions have been constricted as a result; a rebalancing is needed. Behavioural science also has much to say about pressing societal issues such as discrimination, pollution and economic mobility and the structures that produce them 32 , 33 . These ambitions have always been present in the behavioural insights movement 34 , but the factors just outlined acted against their being realized more fully 35 .

The first step towards achieving these ambitions is to replace the dominant metaphor of behavioural science as a tool. Instead, behavioural science should be understood as a lens that can be applied to any public or private issue. This change offers several advantages:

A lens metaphor shows that behavioural science can enhance the use of standard policy options (for example, revealing new ways of structuring taxes) rather than just acting as an alternative to them.

A lens metaphor conveys that the uses of behavioural science are not limited to creating new interventions. A behavioural science lens can, for example, help to reassess existing actions and understand how they may have unintended effects. It emphasizes the behavioural diagnosis of a situation or issue rather than pushing too soon to define a precise target outcome and intervention 31 .

Specifying that this lens can be applied to any action conveys the error of separating ‘behavioural’ and ‘non-behavioural’ issues: most of the goals of private and public action depend on certain behaviours happening (or not). Behavioural science should therefore be integrated into an organization’s core activities rather than acting as an optional specialist tool 36 .

It may seem odd to start with a change of metaphor, but the primary problem here is one of perception. Behavioural science itself shows us the power of framing: the metaphors we use shape the way we behave and therefore can be agents of change 37 . Metaphors are particularly important in this case because the task of broadening the use of behavioural science requires making a compelling case to decision makers 38 . The metaphor of behavioural science as a tool has established credibility and acceptance in a defined area; expanding beyond that area is the task for the next decade.

Build behavioural science into organizations

The second proposal is to broaden the scope of how behavioural science is used in organizations. Given that many dedicated behavioural science teams exist worldwide, it is understandable that much attention has been paid to the question of how they should be set up successfully. However, this focus has diverted attention from considering how to use behavioural science to shape organizations themselves 39 . We need to talk less about how to set up a dedicated behavioural science team and more about how behavioural science can be integrated into an organization’s standard processes. For example, as well as trying to ensure that a departmental budget includes provisions for behavioural science, why not use behavioural science to improve the way this budget is created (are managers anchored to outdated spending assumptions, for instance) 40 ?

The overriding message here is for greater focus on the organizational changes that indirectly apply or support behavioural science principles, rather than just thinking through how the direct and overt use of behavioural science can be promoted in an organization. One advantage to this approach is that it can help organizations to address problems with scaling interventions 36 . If some of the barriers to scaling concern cognitive biases in organizations, these changes could minimize the effect of such biases 41 . Rather than starting with a behavioural science project and then trying to scale it, we could start by looking at operations at scale and understanding how they can be influenced.

It is useful to understand how this approach maps onto existing debates about how to set up a behavioural function in organizations. Doing so reveals six main scenarios, as shown in Table 2 . In the ‘baseline’ scenario, there is limited awareness of behavioural science in the organization, and its principles are not incorporated into processes. In the ‘nudged organization’, behavioural science awareness is still low, but its principles have been used to redesign processes to create better outcomes for staff or service users. In ‘proactive consultancy’, leaders may have set up a dedicated behavioural team without grafting it onto the organization’s standard processes. This lack of institutional grounding puts the team in a less resilient position, meaning that it must always search for new work. In ‘call for the experts’, an organization has concentrated behavioural expertise, but there are also prompts and resources that allow this expertise to be integrated into business as usual. Expertise is not widespread, but access to it is. Processes stimulate demand for behavioural expertise that the central team can fulfil. In ‘behavioural entrepreneurs’, there is behavioural science capacity distributed throughout the organization, through either direct capacity building or recruitment. The problem is that organizational processes do not support these individual pockets of knowledge. Finally, a ‘behaviourally enabled organization’ is one where there is knowledge of behavioural science diffused throughout the organization, which also has processes that reflect this knowledge and support its deployment.

Most discussions make it seem like the meaningful choice is between the different columns in Table 2 —how to organize dedicated behavioural science resources. Instead, the more important move is from the top row to the bottom row: moving from projects to processes, from commissions to culture. A useful way of thinking about this task is about building or upgrading the “choice infrastructure” of the organization 42 . In other words, we should place greater focus on the institutional conditions and connections that support the direct and indirect ways that behavioural science can infuse organizations.

Working out how best to build the choice infrastructure in organizations should be a major priority for applied behavioural science. Already we can see that some features will be crucial: reducing the costs of experimentation, creating a system that can learn from its actions, and developing new and better ways of using behavioural science principles to analyse the behavioural effects of organizational processes, rules, incentives, metrics and guidelines 36 .

See the system

Many important policy challenges emerge from complex adaptive systems, where change often does not happen in a linear or easily predictable way, and where coherent behaviour can emerge from interactions without top-down direction 43 . There are many examples of such systems in human societies, including cities, markets and political movements 44 . These systems can create “wicked problems”—such as the COVID-19 pandemic—where ideas of success are contested, changes are nonlinear and difficult to model, and policies have unintended consequences 45 .

This reality challenges the dominant behavioural science approach, which usually assumes stability over time, keeps a tight focus on predefined target behaviours and predicts linear effects on the basis of a predetermined theory of change 46 . The result, some argue, is a failure to understand how actors are acting and reacting in a complex system that leads policymakers to conclude they are being irrational—and then actually disrupt the system in misguided attempts to correct perceived biases or inefficiencies 47 , 48 , 49 .

These criticisms may overstate the case, but they point to a way forward. Behavioural science can be improved by using aspects of complexity thinking to offer new, credible and practical ways of addressing major policy issues. The first step is to reject crude distinctions of ‘upstream’ versus ‘downstream’ or the ‘individual frame’ versus the ‘system frame’ 50 . Instead, complex adaptive systems show that higher-level features of a system can actually emerge from the lower-level interactions of actors participating in the system 44 . When they become the governing features of the system, they then shape the lower-level behaviour until some other aspect emerges, and the fluctuations continue. An example might be the way that new coronavirus variants emerged in particular settings and then went on to change the course of the whole pandemic, requiring new overall strategic responses.

In other words, we are dealing with “cross-scale behaviours” 49 . For example, norms, rules, practices and culture itself can emerge from aggregated social interactions; these features then shape cognition and behavioural patterns in turn 51 . Recognizing cross-scale behaviours means that behavioural science could:

Identify “leverage points” where a specific shift in behaviour will produce wider system effects 52 . One option is to identify when and where tipping points are likely to occur in a system and then either nudge them to occur or not, depending on the policy goal 53 . For example, if even a subset of consumers decides to switch to a healthier version of a food product, this can have broader effects on a population’s health through the way the food system responds by restocking and product reformulation 54 .

Model the collective implications of individuals using simple heuristics to navigate a system. For example, new models show how small changes to simple heuristics that guide savings (in this case, how quickly households copy the savings behaviours of neighbours) can lead to the sudden emergence of inequalities in wealth 55 .

Find targeted changes to features of a system that create the conditions for wide-ranging shifts in behaviour to occur. For example, a core driver of social media behaviours is the ease with which information can be shared 46 . Even minor changes to this parameter can drive widespread changes—some have argued that such a change is what created the conditions leading to the Arab Spring, for example 56 .
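The savings-imitation result mentioned above can be illustrated with a toy agent-based model. This is a minimal sketch only: the ring topology, copying rule and all parameters below are illustrative assumptions, not the specification of the cited study.

```python
import random

def gini(values):
    """Gini coefficient of a list of positive wealths (0 = equal, near 1 = unequal)."""
    xs = sorted(values)
    n = len(xs)
    weighted_sum = sum(rank * x for rank, x in enumerate(xs, start=1))
    return (2 * weighted_sum) / (n * sum(xs)) - (n + 1) / n

def simulate(n=200, steps=300, copy_prob=0.1, seed=42):
    """Households on a ring save a share of a fixed income each step; with
    probability copy_prob, a household copies the savings rate of a richer
    random neighbour (a simple imitation heuristic)."""
    rng = random.Random(seed)
    rates = [rng.uniform(0.05, 0.30) for _ in range(n)]
    wealth = [0.0] * n
    for _ in range(steps):
        for i in range(n):
            wealth[i] += rates[i]          # income normalized to 1 per step
        for i in range(n):
            if rng.random() < copy_prob:
                j = (i + rng.choice((-1, 1))) % n
                if wealth[j] > wealth[i]:  # imitate the better-off neighbour
                    rates[i] = rates[j]
    return wealth, gini(wealth)

wealth, g = simulate()
```

Even this minimal setup exhibits a cross-scale pattern: a small change to the individual copying rule (copy_prob) alters the population-level wealth distribution summarized by the Gini coefficient.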

This approach also suggests that a broader change in perspective is needed. We need to realize the flaws in launching interventions in isolation and then moving on when a narrowly defined goal has been achieved. Instead, we need to see the longer-term impact on a system of a collection of different policies with varying goals 57 . The best approach may be “system stewardship”, which focuses on creating the conditions for behaviours and indirectly steering adaptation towards overall goals 58 .

Of course, not every problem will involve a complex adaptive system; for simple issues, standard approaches to applying behavioural science work well. Behavioural scientists should therefore develop the skills to recognize the type of system that they are facing (see the system) and then choose their approach accordingly. These skills can be developed through agent-based simulations 59 , immersive technologies 60 or just basic checklists 61 .

Put randomized controlled trials in their place

Randomized controlled trials (RCTs) have been a core part of applied behavioural science, and they work well in relatively simple and stable contexts. But they can fare worse in complex adaptive systems, whose many shifting connections can make it difficult to keep a control group isolated and where a narrow focus on predetermined outcomes may neglect others that are important but difficult to predict 43 , 62 .

We can strengthen RCTs to deal better with complexity. We can try to gain a better understanding of the system interactions and anticipate how they may play out, perhaps through “dark logic” exercises that try to trace potential harms rather than just benefits 63 . For example, we might anticipate that sending parents text messages encouraging them to talk to their children about the school science curriculum may achieve this outcome at the expense of other school-supporting behaviours—as turned out to be the case 64 . Engaging the people who will implement and participate in an intervention will be a key part of this effort.

Another option is to set up RCTs to measure diffusion and contagion in networks, either by creating separate online environments or by randomizing real-world clusters, such as separate villages 65 , 66 . Finally, we can build feedback and adaptation into the design of the RCT and the intervention, allowing adjustments to changing conditions 67 , 68 . Options include using two-stage trial protocols 69 , evolutionary RCTs 70 , sequential multiple assignment randomized trials 71 and ‘bandit’ algorithms that identify high-performing interventions and allocate more people to them 72 .
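As a sketch of the last idea, an epsilon-greedy bandit usually assigns each arriving participant to the arm with the best observed success rate while occasionally exploring a random arm. This is illustrative only: the arm probabilities are made up, and real adaptive trials use more careful designs (such as Thompson sampling with pre-registered stopping rules).

```python
import random

def epsilon_greedy_trial(success_probs, n_participants=2000, epsilon=0.1, seed=0):
    """Adaptive allocation: exploit the best observed arm most of the time,
    explore a random arm with probability epsilon. success_probs are the
    unknown true rates, used here only to simulate participant outcomes."""
    rng = random.Random(seed)
    k = len(success_probs)
    successes = [0] * k
    counts = [0] * k
    for _ in range(n_participants):
        if rng.random() < epsilon or 0 in counts:
            arm = rng.randrange(k)  # explore (or warm up untried arms)
        else:
            arm = max(range(k), key=lambda a: successes[a] / counts[a])
        counts[arm] += 1
        if rng.random() < success_probs[arm]:
            successes[arm] += 1
    return counts

# Hypothetical three-arm trial: the third intervention is truly best
counts = epsilon_greedy_trial([0.25, 0.40, 0.60])
```

Over the run, allocation shifts towards the best-performing arm, so more participants receive the more effective intervention while evidence is still being gathered.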

Behavioural science can also be used to enhance alternative ways of measuring impacts—in particular, agent-based modelling, which tries to simulate the interactions between the different actors in a system 73 . The agents in these models are mostly assumed to be operating on rational choice principles 74 , 75 . There is therefore an opportunity to build in more evidence about the drivers of behaviour—for example, habits and social comparisons 49 .

Replication, variation and adaptation

The ‘replication crisis’ of the past decade has seen intense debate and concern about the reliability of behavioural science findings. Poor research practices were a major cause of the replication crisis; the good news is that many have improved as a result 76 , 77 . Now there are sharper incentives to preregister analysis plans, greater expectations that data and code will be freely shared, and wider acceptance of post-publication review of findings 78 .

Behavioural scientists need to secure and build on these advances to move towards a future where appropriately scoped meta-analyses of high-quality studies (including deliberate replications) are used to identify the most reliable interventions, develop an accurate sense of the likely size of their effects and avoid the weaker options. We have a responsibility to discard ideas if solid evidence now shows that they are shaky, and to offer a realistic view of what behavioural science can accomplish 18 .

That responsibility also requires us to have a hard conversation about heterogeneity in results: the complexity of human behaviour creates so much statistical noise that it is often hard to detect consistent signals and patterns 79 . The main drivers of heterogeneity are that contexts influence results and that the effect of an intervention may vary greatly between groups within a population 80 , 81 . For example, choices of how to set up experiments vary greatly between studies and researchers, in ways that often go unnoticed 82 . A recent study ran an experiment to measure the impact of these contextual factors. Participants were randomly allocated to studies designed by different research teams to test the same hypothesis. For four of the five research questions, studies actually produced effects in opposing directions. These “radically dispersed” results indicate that “idiosyncratic choices in stimulus design have a very large effect on observed results” 83 . These factors complicate the idea of replication itself: a ‘failed’ replication may not show that a finding was false but rather show how it exists under some conditions and not others 84 .
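One standard way to quantify such heterogeneity is a random-effects meta-analysis. The sketch below implements the classic DerSimonian–Laird estimator on hypothetical effect sizes; the data are invented for illustration, and real analyses would typically use a dedicated package.

```python
import math

def dersimonian_laird(effects, std_errors):
    """Random-effects meta-analysis: pools per-study effect sizes while
    estimating tau^2 (between-study variance) and I^2 (the share of total
    variation attributable to heterogeneity rather than sampling noise)."""
    k = len(effects)
    w = [1 / se ** 2 for se in std_errors]          # fixed-effect weights
    fe = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - fe) ** 2 for wi, y in zip(w, effects))
    df = k - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                   # between-study variance
    w_star = [1 / (se ** 2 + tau2) for se in std_errors]
    pooled = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    se_pooled = math.sqrt(1 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, se_pooled, tau2, i2

# Illustrative effects (standardized mean differences) from five hypothetical sites
effects = [0.30, 0.10, 0.45, -0.05, 0.20]
ses = [0.08, 0.10, 0.09, 0.12, 0.07]
pooled, se, tau2, i2 = dersimonian_laird(effects, ses)
```

A high I² on data like these signals exactly the situation described above: the per-site effects disagree by more than sampling error alone can explain, so a single pooled number should be reported alongside the between-study variance rather than in place of it.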

These challenges mean that applied behavioural scientists need to set a much higher bar for claiming that an effect holds true across many unspecified settings 85 . There is a growing sense that interventions should be talked about as hypotheses that were true in one place and that may need adapting to be true elsewhere 18 , 86 .

Narrative changes need to be complemented by specific proposals. The first concerns data collection: behavioural scientists should expand studies to include (and thus examine) a wider range of contexts and participants and gather richer data about them. To date, only a small minority of behavioural studies have provided enough information to see how effects vary 87 . Moreover, the gaps in data coverage may result from and create systemic issues in society: certain groups may be excluded or may have their data recorded differently from others 88 . Coordinated multi-site studies will be needed to collect enough data to explore heterogeneity systematically; crowdsourced studies offer particular promise for testing context and methods 83 . Realistically, this work is going to require a major investment in research infrastructure to set up standing panels of participants, coordinate between institutions, and reduce barriers to data collection and transfer 80 . These efforts cannot be limited to just a few countries.

Behavioural scientists also need to get better at judging how strongly an intervention’s results were linked to its context and therefore how much adaptation it needs 81 . We should use and modify frameworks from implementation science to develop such judgement 89 . Finally, we need to codify and cultivate the practical skills needed to adapt interventions successfully to new contexts; expertise in behavioural science should not be seen as simply knowing about concepts and findings in the abstract. It is therefore particularly valuable to learn from practitioners how they adapted specific interventions to new contexts. These accounts are starting to emerge, but they are still rare 18 , since researchers are incentivized to claim universality for their results rather than report and value contextual details 82 .

Beyond lists of biases

The heterogeneity in behavioural science findings also means that our underlying theories need to improve: we are lacking good explanations for why findings vary so much 84 . This need for better theories can be seen as part of a wider ‘theory crisis’ in psychology, which has thrown up two big concerns for behavioural science 90 , 91 .

The first stems from the fact that theories of behaviour often try to explain phenomena that are complex and wide-ranging 92 . If you are trying to show how emotion and cognition interact (for example), this involves many causes and interactions. Trying to cover this variability can produce descriptions of relationships and definitions of constructs that are abstract and imprecise 85 . The result is theories that are vague and weak, since they can be used to generate many different hypotheses—some of which may actually contradict each other 90 . That makes theories hard to disprove, and so weak theories stumble on, unimproved 93 .

The other concern is that theories can make specific predictions, but they are disconnected from each other—and from a deeper, general framework that can provide broader explanations (such as evolutionary theory) 94 . The main way this issue affects behavioural science is through heuristics and biases. Examples of individual biases are accessible and popular, and they are how many people first encounter behavioural science. These ideas are incredibly useful, but they have often been presented as lists of standalone curiosities in a way that is incoherent, reductive and deadening. Presenting lists of biases does not help us to distinguish or organize them 95 , 96 , 97 . Such lists can also create overconfident thinking that targeting a specific bias (in isolation) will achieve a certain outcome 98 .

Perhaps most importantly, focusing on lists of biases distracts us from answering core underlying questions. When does one or another bias apply? Which are widely applicable, and which are highly specific? How does culture or life experience affect whether a bias influences behaviour or not 99 , 100 ? These are highly practical questions when one is faced with tasks such as taking an intervention to new places.

The concern for behavioural science is that it uses both these high-level frameworks (such as dual-process theories) and jumbled collections of heuristics and biases, with little in the middle to draw both levels together 94 . Recent years have seen valuable advances in connecting and systematizing theories 101 , 102 . At the same time, there are various ongoing attempts to create strong theories: “coherent and useful conceptual frameworks into which existing knowledge can be integrated” 93 (see also refs. 91 , 103 , 104 ). Naturally, such work should continue, but I think that applied behavioural science will benefit particularly from theories that are practical. By this I mean:

They fill the gap between day-to-day working hypotheses and comprehensive and systematic attempts to find universal underlying explanations.

They are based on data rather than being derived from pure theorizing 105 .

They can generate testable hypotheses, so they can be disproved 106 .

They specify the conditions under which a prediction applies or does not 85 .

They are geared towards realistic adaptation by practitioners and offer “actionable steps toward solving a problem that currently exists in a particular context in the real world” 107 .

Resource rationality may be a good example of a practical theory. It starts from the basis that people make rational use of their limited cognitive resources 108 . Given that there is a cost to thinking, people will look for solutions that balance choice quality with effort. Resource rationality can offer a “unifying framework for a wide range of successful models of seemingly unrelated phenomena and cognitive biases” that can be used to build models for how people act 108 .

A recent study has shown how these models not only can predict how people will respond to different kinds of nudges in certain contexts but also can be integrated with machine learning to create an automated method for constructing “optimal nudges” 109 . Such an approach could reveal new kinds of nudges and make creating them much more efficient. More reliable ways of developing personalized nudges are also possible. These are all highly practical benefits coming from applying a particular theory.
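
The cost–benefit logic at the heart of resource rationality can be sketched in a few lines of code (a toy model with invented numbers, not the models of the studies cited): if evaluating each additional option improves the expected quality of the final choice but costs effort, the rational amount of search falls as thinking becomes more costly.

```python
# Toy resource-rational search: option qualities are uniform on [0, 1],
# so the expected best of n options is n / (n + 1); each evaluation
# costs a fixed amount of cognitive effort.

def net_value(n, effort_cost):
    expected_best = n / (n + 1)
    return expected_best - effort_cost * n

def rational_search_size(effort_cost, max_n=100):
    """Number of options a resource-rational agent chooses to evaluate."""
    return max(range(1, max_n + 1), key=lambda n: net_value(n, effort_cost))

print(rational_search_size(0.01))  # cheap thinking: evaluate many options
print(rational_search_size(0.10))  # costly thinking: settle much sooner
```

The point is not the toy numbers but that apparently unrelated shortcuts can fall out of one explicit trade-off that can be modelled and tested.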

Predict and adjust

Hindsight bias is what happens when we feel ‘I knew it all along’, even if we did not 110 . When the results of an experiment come in, hindsight bias may mean that behavioural scientists are more likely to think that they had predicted them or quickly find ways of explaining why they occurred. Hindsight bias is a big problem because it breeds overconfidence, impedes learning, dissuades innovation and prevents us from understanding what is truly unexpected 111 , 112 .

In response, behavioural scientists should establish a standard practice of predicting the results of experiments and then receiving feedback on how their predictions performed. Hindsight bias can flourish if we do not systematically capture expectations or priors about what the results of a study will be 113 . Making predictions provides regular, clear feedback of the kind that is more likely to trigger surprise and reassessment than hindsight bias 114 . Establishing the average expert prediction—which may be different from the null hypothesis in an experiment—clearly reveals when results challenge the consensus 115 .

There are existing practices to build on here, such as the practice of preregistering hypotheses and trial protocols and the use of a Bayesian approach to make priors explicit. Indeed, more and more studies are explicitly integrating predictions 116 , 117 . However, barriers lie in the way of further progress. People may not welcome the ensuing challenge to their self-image, predicting may seem like one thing too many on the to-do list, and the benefits lie in the future. Some responses to these challenges are to make predicting easy by incorporating it into standard processes; minimize threats to predictors’ self-image (for example, by making and feeding back predictions anonymously) 118 ; give concrete prompts for learning and reflection, to disrupt the move from surprise to hindsight bias 119 ; and build learning from prediction within and between institutions.
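
A lightweight version of this practice requires little infrastructure. The sketch below (a hypothetical helper with invented numbers) captures predictions before results arrive and then measures how far the observed effect fell from the consensus, so that genuine surprises are flagged rather than explained away.

```python
import statistics

def score_predictions(predictions, observed):
    """Compare an observed effect with predictions collected beforehand.

    Returns the consensus (mean) prediction and how many standard
    deviations of the prediction spread the observed result fell from it.
    """
    consensus = statistics.mean(predictions)
    spread = statistics.stdev(predictions)
    surprise = abs(observed - consensus) / spread
    return consensus, surprise

# Predicted effect of a reminder letter on response rates (percentage points).
predictions = [2.0, 3.5, 1.5, 4.0, 2.5]
consensus, surprise = score_predictions(predictions, observed=8.0)

print(round(consensus, 1))  # the average expert prediction
print(round(surprise, 1))   # a large value should prompt reassessment
```

Feeding such scores back (anonymously, if needed) creates exactly the regular, clear feedback that counteracts hindsight bias.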

Be humble, explore and enable

This proposal is made up of three connected ideas. First, behavioural scientists need to become more aware of the limits of their knowledge and to avoid fitting behaviours into pre-existing ideas around biases or irrationality. Second, they should broaden the exploratory work they conduct, in terms of gaining new types of qualitative data and recognizing how experiences vary by group and geography. Finally, they should develop new approaches to enable people to apply behavioural science themselves—and adopt new criteria for judging when these approaches are appropriate.

Humility is important because behavioural scientists (like other experts) may overconfidently rely on decontextualized principles that do not match the real-world setting for a behaviour 29 . Deeper inquiry can reveal reasonable explanations for what seem to be behavioural biases 120 . In response, those applying behavioural science should avoid using the term ‘irrationality’, which can limit attempts to understand actions in context; acknowledge that diagnoses of behaviour are provisional and incomplete (epistemic humility) 121 ; and design processes and institutions to counteract overconfidence 122 .

How do we conduct these deeper inquiries? Three areas demand particular focus in the future. First, pay greater attention to people’s goals and strategies and their own interpretations of their beliefs, feelings and behaviours 123 . Second, reach a wider range of experiences, including marginalized voices and communities, understanding how structural inequalities can lead to expectations and experiences varying greatly by group and geography 124 . Third, recognize how apparently universal cognitive processes are shaped by specific contexts, thereby unlocking new ways for behavioural science to engage with values and culture 125 , 126 . For example, one influential view of culture is that it influences action “not by providing the ultimate values toward which action is oriented but by shaping a repertoire or ‘toolkit’ of habits, skills, and styles” 127 . There are similarities here to the heuristics-and-biases toolkit perspective on behaviour: behavioural scientists could start explaining how and when certain parts of the toolkit become more or less salient.

More can and should be done to broaden ownership of behavioural science approaches. Many (but far from all) behavioural science applications have been top-down, with a choice architect enabling certain outcomes 8 , 128 . One route is to enable people to become more involved in designing interventions that affect them—and “nudge plus” 129 , “self-nudges” 130 and “boosts” 131 have been proposed as ways of doing this. Reliable criteria are needed to decide when enabling approaches may be appropriate, including whether the opportunity to use an enabling approach exists; ability and motivation; preferences; learning and setup costs; equity impacts; and effectiveness, recognizing that evidence on this point is still emerging 132 , 133 .

But these new approaches should not be seen simplistically as enabling alternatives to disempowering nudges 134 . Instead, we need to consider how far the person performing the behaviour is involved in shaping the initiative itself, as well as the level and nature of any capacity created by the intervention. People may be heavily engaged in selecting and developing a nudge intervention that nonetheless does not trigger any reflection or build any skills 135 . Alternatively, a policymaker may have paternalistically assumed that people want to build up their capacity to perform an action, when in fact they do not. This is the real choice to be made.

A final piece missing from current thinking is that enabling people can lead to a major decentring of the use of behavioural science. If more people are enabled to use behavioural science, they may decide to introduce interventions that influence others 136 . Rather than just creating self-nudges through altering their immediate environments, they may decide that wider system changes are needed instead. A range of people could be enabled to create nudges that generate positive societal change (with no central actors involved). This points towards a future where policy or product designers act less like (choice) architects and more like facilitators, brokers and partnership builders 137 .

Data science for equity

Recent years have seen growing interest in using new data science techniques to reliably analyse the heterogeneity of large datasets 138 , 139 . Machine learning is claimed to offer more sophisticated, reliable and data-driven ways of detecting meaningful patterns in datasets 140 , 141 . For example, a machine learning approach has been shown to be more effective than conventional segmentation approaches at analysing patterns of US household energy usage to reduce peak consumption 142 .

A popular idea is to use such techniques to better understand what works best for certain groups and thereby tailor an offering to them 143 . Scaling an intervention stops being about a uniform roll-out and instead becomes about presenting recipients with the aspects that are most effective for them 144 .

This vision is often presented as straightforward and obviously desirable, but it runs almost immediately into ethical quandaries and value judgements. People are unlikely to know what data have been used to target them and how; the specificity of the data involved may make manipulation more likely, since it may exploit sensitive personal vulnerabilities; and expectations of universality and non-discrimination in public services may be violated 143 , 145 .

Closely related to manipulation concerns is the fear that data science will open up new opportunities to exploit, rather than to help, the vulnerable 146 . One aspect is algorithmic bias. Models using data that reflect historical patterns of discrimination can produce results that reinforce these outcomes 147 . Since disadvantaged groups are more likely to be subject to the decisions of algorithms, there is a particular risk that inequalities will be perpetuated—although some studies argue that algorithms are actually less likely to be biased than human judgement 148 , 149 .

There is also emerging evidence that people often object to personalization. While they support some personalized services, they consistently oppose advertising that is customized on the basis of sensitive information—and they are generally against the collection of the information that personalization relies on 150 . To navigate this landscape, behavioural scientists need to examine four factors:

Who does the personalization target, and using what criteria? Many places have laws or norms to ensure equal treatment based on personal characteristics. When does personalization violate those principles?

How is the intervention constructed? To what extent do the recipients have awareness of the personalization, choice over whether it occurs, control over its level or nature, and the opportunity to give feedback on it 151 ?

When is the intervention delivered? Does it arrive at a time when the recipient is vulnerable? Would they be likely to regret it later, given time to reflect?

Why is personalization happening? Does it aim to exploit and harm or to support and protect, recognizing that those terms are often contested?

Taking these factors into account, I propose that the main opportunity is for data science to identify the ways in which an intervention or situation appears to increase inequalities, and reduce them 152 . For example, groups that are particularly likely to miss a filing requirement could be offered pre-emptive help. Algorithms can be used to better explain the causes of increased knee pain experienced in disadvantaged communities, thereby giving physicians better information to act on 153 .

I call this idea data science for equity. It addresses the ‘why’ factor by using data science to support, not exploit. ‘Data science for equity’ may seem like a platitude, but it is a very real choice: the combination of behavioural and data science is powerful and has been used to create harm in the past. Moreover, it needs to be complemented by attempts to increase agency (the ‘how’ factors), as in a recent study that showed how boosts can be used to help people to detect micro-targeting of advertising 154 , and studies that obtain more data on which uses of personalization people find acceptable.
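
As a sketch of what the first analytical step might look like in code (entirely invented data and field names): estimate the intervention’s effect separately for each group and flag the groups for whom it underperforms, so that pre-emptive help can be targeted.

```python
from collections import defaultdict
from statistics import mean

# Each record: (group, treated?, outcome), e.g. whether a filing was completed.
records = [
    ("A", True, 1), ("A", True, 1), ("A", False, 0), ("A", False, 0),
    ("B", True, 1), ("B", True, 0), ("B", False, 0), ("B", False, 0),
]

def effects_by_group(records):
    """Estimate the treated-minus-control outcome gap within each group."""
    outcomes = defaultdict(lambda: {True: [], False: []})
    for group, treated, outcome in records:
        outcomes[group][treated].append(outcome)
    return {
        group: mean(arms[True]) - mean(arms[False])
        for group, arms in outcomes.items()
    }

effects = effects_by_group(records)
lagging = [g for g, eff in effects.items() if eff < max(effects.values())]
print(effects)  # per-group effect estimates
print(lagging)  # groups for whom the intervention underperforms
```

In practice, the same logic would run over administrative data with proper uncertainty estimates; the sketch only shows the shape of the analysis.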

No “view from nowhere”

The final proposal is one of the most wide-ranging, challenging and important. For the philosopher Thomas Nagel, the “view from nowhere” was an objective stance that allowed us to “transcend our particular viewpoint” 155 . Taking such a stance may not be possible for behavioural scientists. We bring certain assumptions and ways of seeing to what we do; we are always situated in, embedded in and entangled with ideas and situations 124 . We cannot assume that there is some set-aside position from which to observe the behaviour of others; no objective observation deck outside society exists 156 .

Behavioural scientists are defined by having knowledge, skills and education; many of them can use these resources to shape public and private actions. They are therefore in a privileged position, but they may not see the extent to which they hold elite positions that stop them from understanding people who think differently (for example, those who are sceptical of education) 157 . The danger is that elites place their group values and preferences on others, while thinking that they are adopting a view from nowhere 158 , 159 . This does not mean that they can never act or opine, but rather that they need to carefully understand their own positionality and those of others before doing so.

There have been repeated concerns that the field is still highly homogeneous in other ways as well. Gender, race, physical abilities, sexuality and geography also influence the viewpoints, practices and theories of behavioural scientists 160 , 161 . Only a quarter of the behavioural insights teams catalogued in a 2020 survey were based in the Global South 162 . An over-reliance on using English in cognitive science has led to the impact of language on thought being underestimated 163 . The past decade has shown how behaviours can vary greatly from culture to culture, even as psychology has tended to generalize from relatively small and unrepresentative samples 164 . Behavioural science studies often present data from Western, educated, industrialized, rich and democratic samples as more generalizable to humans as a whole 165 . So, rather than claiming that science is value-free, we need to find realistic ways of acknowledging and improving this reality 166 .

A starting point is for behavioural scientists to cultivate self-scrutiny by querying how their identities and experiences contribute to their stance on a topic. Hypothesis generation could particularly benefit from this exercise, since arguably it is closely informed by the researcher’s personal priorities and preferences 167 . Behavioural scientists could be actively reflecting on interventions in progress, including what factors are contributing to power dynamics 168 . Self-scrutiny may not be enough. We should also find more ways for people to judge researchers and decide whether they want to participate in research—going beyond consent forms. If they do participate, there are many opportunities to combine behavioural science with co-design 128 .

Finally, we should take actions to increase diversity (of several kinds) among behavioural scientists, teams, collaborations and institutions. Doing this requires addressing barriers such as the lack of professional networks connecting the Global North and Global South, and the time needed to build understanding of the tactics required to write successful grant applications from funders 169 . In many countries, much more could be done to increase the ethnic and racial diversity of the behavioural science field—for example, through support for starting and completing PhDs or through reducing the substantial racial gaps present in much public funding of research 170 , 171 .

Applied behavioural science has seen rapid growth and meaningful achievements over the past decade. Although the popularity of nudging provided its initial impetus, an ambition soon formed to apply a broader range of techniques to a wider range of goals. However, a set of credible critiques has emerged as levels of activity have grown. As Fig. 1 indicates, there are proposals that can address these critiques (and progress is already being made on some of them). When considered together, these proposals present a coherent vision for the scope, methods and values of applied behavioural science.

This vision is not limited to technical enhancements for the field; it also covers questions of epistemology, identity, politics and praxis. A common theme throughout the ten proposals is the need for self-reflective practice that is aware of how its knowledge and approaches have originated and how they are situated. In other words, a main priority for behavioural scientists is to recognize the various ways that their own behaviour is being shaped by structural, institutional, environmental and cognitive factors.

Realizing these proposals will require sustained work and experiencing the discomfort of disrupting what may have become familiar and comfortable practices. That is a particular problem because incentives for change are often weak or absent. Improving applied behavioural science has some characteristics of a social dilemma: the benefits are diffused across the field as a whole, while the costs fall on any individual party who chooses to act (or act first). Practitioners are often in competition. Academics often want to establish a distinctive research agenda. Commissioners are often rewarded for risk aversion. Impaired coordination is particularly problematic because coordination forms the basis for several necessary actions, such as the multi-site studies to measure heterogeneity.

Solving these problems will be hard. Funders need to find mechanisms that adequately reward coordination and collaboration by recognizing the true costs involved. Practitioners need to perceive the competitive advantages of adopting new practices and be able to communicate them to clients. Clients themselves need to have a realistic sense of what can be achieved but still be motivated to commit resources. Stepping back, the starting point for addressing these barriers needs to be a change in the narrative about what the field does and could do—a new set of ambitions to aim for. This manifesto aims to help to shape such a narrative.

Straßheim, H. The rise and spread of behavioral public policy: an opportunity for critical research and self-reflection. Int. Rev. Public Policy 2 , 115–128 (2020).

Gilovich, T., Griffin, D. & Kahneman, D. (eds) Heuristics and Biases: The Psychology of Intuitive Judgment (Cambridge Univ. Press, 2002).

Lieberman, M. D. Social cognitive neuroscience: a review of core processes. Annu. Rev. Psychol. 58 , 259–289 (2007).

Ajzen, I. The theory of planned behaviour. Organ. Behav. Hum. Decis. Process. 50 , 179–211 (1991).

Thaler, R. Misbehaving: The Making of Behavioral Economics (W.W. Norton, 2015).

Thaler, R. Mental accounting and consumer choice. Mark. Sci. 4 , 199–214 (1985).

Pykett, J., Jones, R. & Whitehead, M. (eds) Psychological Governance and Public Policy: Governing the Mind, Brain and Behaviour (Routledge, 2017).

Thaler, R. & Sunstein, C. R. Nudge: Improving Decisions about Health, Wealth, and Happiness (Yale Univ. Press, 2008).

Hallsworth, M. & Kirkman, E. Behavioral Insights (MIT Press, 2020).

Oliver, A. The Origins of Behavioural Public Policy (Cambridge Univ. Press, 2017).

Straßheim, H. & Beck, S. in Handbook of Behavioural Change and Public Policy (eds Beck, S. & Straßheim, H.) 1–22 (Edward Elgar, 2019).

Ball, S. & Feitsma, J. The boundaries of behavioural insights: observations from two ethnographic studies. Evid. Policy 16 , 559–577 (2020).

Afif, Z., Islan, W., Calvo-Gonzalez, O. & Dalton, A. Behavioural Science Around the World: Profiles of 10 Countries (World Bank, 2018).

Science that can change the world. Nat. Hum. Behav. https://doi.org/10.1038/s41562-019-0642-2 (2019).

Behavioural Insights and New Approaches to Policy Design: The Views from the Field (OECD, 2015).

Behavioural Insights and Public Policy: Lessons from Around the World (OECD, 2015).

Feitsma, J. N. P. The behavioural state: critical observations on technocracy and psychocracy. Policy Sci. 51 , 387–410 (2018).

Mažar, N. & Soman, D. (eds) Behavioral Science in the Wild (Univ. of Toronto Press, 2022).

Oliver, A. Towards a new political economy of behavioral public policy. Public Adm. Rev. 79 , 917–924 (2019).

Grimmelikhuijsen, S., Jilke, S., Olsen, A. L. & Tummers, L. Behavioral public administration: combining insights from public administration and psychology. Public Adm. Rev. 77 , 45–56 (2017).

Cadario, R. & Chandon, P. Which healthy eating nudges work best? A meta-analysis of field experiments. Mark. Sci. 39 , 465–486 (2020).

Damgaard, M. T. & Nielsen, H. S. Nudging in education. Econ. Educ. Rev. 64 , 313–342 (2018).

Ferrari, L., Cavaliere, A., De Marchi, E. & Banterle, A. Can nudging improve the environmental impact of food supply chain? A systematic review. Trends Food Sci. Technol. 91 , 184–192 (2019).

Fishbane, A., Ouss, A. & Shah, A. K. Behavioral nudges reduce failure to appear for court. Science 370 , eabb6591 (2020).

Mertens, S., Herberz, M., Hahnel, U. J. & Brosch, T. The effectiveness of nudging: a meta-analysis of choice architecture interventions across behavioral domains. Proc. Natl Acad. Sci. USA 119 , e2107346118 (2022).

Maier, M. et al. No evidence for nudging after adjusting for publication bias. Proc. Natl Acad. Sci. USA 119 , e2200300119 (2022).

Levitt, S. D. & List, J. A. What do laboratory experiments measuring social preferences reveal about the real world? J. Econ. Perspect. 21 , 153–174 (2007).

Hansen, P. G. Tools and Ethics for Applied Behavioural Insights: The BASIC Toolkit (OECD, 2019).

Schmidt, R. & Stenger, K. Behavioral brittleness: the case for strategic behavioral public policy. Behav. Public Policy https://doi.org/10.1017/bpp.2021.16 (2021).

United Nations Behavioural Science Report https://www.uninnovation.network/assets/BeSci/UN_Behavioural_Science_Report_2021.pdf (United Nations Innovation Network, 2021).

Hansen, P. G. What are we forgetting? Behav. Public Policy 2 , 190–197 (2018).

Van Rooij, B. & Fine, A. The Behavioral Code: The Hidden Ways the Law Makes Us Better or Worse (Beacon, 2021).

Andre, P., Haaland, I., Roth, C. & Wohlfart, J. Narratives about the Macroeconomy CEBI Working Paper Series (Center for Economic Behavior and Inequality, 2021).

Dolan, P., Hallsworth, M., Halpern, D., King, D. & Vlaev, I. MINDSPACE: Influencing Behaviour Through Public Policy (Institute for Government and Cabinet Office, 2010).

Meder, B., Fleischhut, N. & Osman, M. Beyond the confines of choice architecture: a critical analysis. J. Econ. Psychol. 68 , 36–44 (2018).

Soman, D. & Yeung, C. (eds) The Behaviorally Informed Organization (Univ. of Toronto Press, 2020).

Thibodeau, P. H. & Boroditsky, L. Metaphors we think with: the role of metaphor in reasoning. PLoS ONE 6 , e16782 (2011).

Feitsma, J. Brokering behaviour change: the work of behavioural insights experts in government. Policy Polit. 47 , 37–56 (2019).

Battaglio, R. P. Jr, Belardinelli, P., Bellé, N. & Cantarelli, P. Behavioral public administration ad fontes: a synthesis of research on bounded rationality, cognitive biases, and nudging in public organizations. Public Adm. Rev. 79 , 304–320 (2019).

Cantarelli, P., Bellé, N. & Belardinelli, P. Behavioral public HR: experimental evidence on cognitive biases and debiasing interventions. Rev. Public Pers. Adm. 40 , 56–81 (2020).

Mayer, S., Shah, R. & Kalil, A. in The Scale-Up Effect in Early Childhood and Public Policy: Why Interventions Lose Impact at Scale and What We Can Do about It (eds List, J. et al.) 41–57 (Routledge, 2021).

Schmidt, R. A model for choice infrastructure: looking beyond choice architecture in behavioral public policy. Behav. Public Policy https://doi.org/10.1017/bpp.2021.44 (2022).

Magenta Book 2020: Supplementary Guide: Handling Complexity in Policy Evaluation (HM Treasury, 2020).

Boulton, J. G., Allen, P. M. & Bowman, C. Embracing Complexity: Strategic Perspectives for an Age of Turbulence (Oxford Univ. Press, 2015).

Angeli, F., Camporesi, S. & Dal Fabbro, G. The COVID-19 wicked problem in public health ethics: conflicting evidence, or incommensurable values? Hum. Soc. Sci. Commun. 8 , 1–8 (2021).

Bak-Coleman, J. B. et al. Stewardship of global collective behavior. Proc. Natl Acad. Sci. USA 118 , e2025764118 (2021).

Dunlop, C. A. & Radaelli, C. M. in Nudge and the Law: A European Perspective (eds Alemanno, A. & Simony, A. L.) 139–158 (Hart, 2015).

Scott, J. C. Seeing Like a State (Yale Univ. Press, 1998).

Schill, C. et al. A more dynamic understanding of human behaviour for the Anthropocene. Nat. Sustain. 2 , 1075–1082 (2019).

Chater, N. & Loewenstein, G. The i-Frame and the s-Frame: how focusing on individual-level solutions has led behavioral public policy astray. Behav. Brain Sci. https://doi.org/10.1017/S0140525X22002023 (2022).

DiMaggio, P. & Markus, H. R. Culture and social psychology: converging perspectives. Soc. Psychol. Q. 73 , 347–352 (2010).

Abson, D. J. et al. Leverage points for sustainability transformation. Ambio 46 , 30–39 (2017).

Andreoni, J., Nikiforakis, N. & Siegenthaler, S. Predicting social tipping and norm change in controlled experiments. Proc. Natl Acad. Sci. USA 118 , e2014893118 (2021).

Hallsworth, M. Rethinking public health using behavioural science. Nat. Hum. Behav. 1 , 612 (2017).

Asano, Y. M., Kolb, J. J., Heitzig, J. & Farmer, J. D. Emergent inequality and business cycles in a simple behavioral macroeconomic model. Proc. Natl Acad. Sci. USA 118 , e2025721118 (2021).

Jones-Rooy, A. & Page, S. E. The complexity of system effects. Crit. Rev. 24 , 313–342 (2012).

Hawe, P., Shiell, A. & Riley, T. Theorising interventions as events in systems. Am. J. Community Psychol. 43 , 267–276 (2009).

Hallsworth, M. System Stewardship: The Future of Policymaking? (Institute for Government, 2011).

Rates, C. A., Mulvey, B. K., Chiu, J. L. & Stenger, K. Examining ontological and self-monitoring scaffolding to improve complex systems thinking with a participatory simulation. Instr. Sci. 50 , 199–221 (2022).

Fernandes, L., Morgado, L., Paredes, H., Coelho, A. & Richter, J. Immersive learning experiences for understanding complex systems. In iLRN 2019 London-Workshop, Long and Short Paper, Poster, Demos, and SSRiP Proceedings from the Fifth Immersive Learning Research Network Conference 107–113 http://hdl.handle.net/10400.2/8368 (Verlag der Technischen Universität Graz, 2019).

Annex 1 Checklist for Assessing the Level of Complexity of a Program (International Initiative for Impact Evaluation), https://www.3ieimpact.org/sites/default/files/2021-07/complexity-blg-Annex1-Checklist_assessing_level_complexity.pdf (2021).

Deaton, A. & Cartwright, N. Understanding and misunderstanding randomized controlled trials. Soc. Sci. Med. 210 , 2–21 (2018).

Bonell, C., Jamal, F., Melendez-Torres, G. J. & Cummins, S. ‘Dark logic’: theorising the harmful consequences of public health interventions. J. Epidemiol. Community Health 69 , 95–98 (2015).

Robinson, C. D., Chande, R., Burgess, S. & Rogers, T. Parent engagement interventions are not costless: opportunity cost and crowd out of parental investment. Educ. Eval. Policy Anal. 44 , 170–177 (2021).

Centola, D. How Behaviour Spreads: The Science of Complex Contagions (Princeton Univ. Press, 2018).

Kim, D. A. et al. Social network targeting to maximise population behaviour change: a cluster randomised controlled trial. Lancet 386 , 145–153 (2015).

Berry, D. A. Bayesian clinical trials. Nat. Rev. Drug Discov. 5 , 27–36 (2006).

Marinelli, H. A., Berlinski, S. & Busso, M. Remedial education: evidence from a sequence of experiments in Colombia. J. Hum. Resour. 0320-10801R2 (2021).

Anders, J., Groot, B. & Heal, J. Running RCTs with complex interventions. The Behavioural Insights Team https://www.bi.team/blogs/running-rcts-with-complex-interventions/ (1 November 2017).

Volpp, K. G., Terwiesch, C., Troxel, A. B., Mehta, S. & Asch, D. A. Making the RCT more useful for innovation with evidence-based evolutionary testing. Healthcare 1 , 4–7 (2013).

Kidwell, K. M. & Hyde, L. W. Adaptive interventions and SMART designs: application to child behavior research in a community setting. Am. J. Eval. 37 , 344–363 (2016).

Caria, S., Kasy, M., Quinn, S., Shami, S. & Teytelboym, A. An Adaptive Targeted Field Experiment: Job Search Assistance for Refugees in Jordan. Warwick Economics Research Papers No. 1335 (2021).

The Complexity Evaluation Toolkit v.1.0, https://www.cecan.ac.uk/wp-content/uploads/2020/08/EPPN-No-03-Agent-Based-Modelling-for-Evaluation.pdf (CECAN, 2021).

Schluter, M. et al. A framework for mapping and comparing behavioural theories in models of social–ecological systems. Ecol. Econ. 131 , 21–35 (2017).

Wijermans, N., Boonstra, W. J., Orach, K., Hentati-Sundberg, J. & Schlüter, M. Behavioural diversity in fishing—towards a next generation of fishery models. Fish Fish. 21 , 872–890 (2020).

Simmons, J. P., Nelson, L. D. & Simonsohn, U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol. Sci. 22 , 1359–1366 (2011).

Nelson, L. D., Simmons, J. & Simonsohn, U. Psychology’s Renaissance. Annu. Rev. Psychol. 69 , 511–534 (2018).

Frias-Navarro, D., Pascual-Llobell, J., Pascual-Soler, M., Perezgonzalez, J. & Berrios-Riquelme, J. Replication crisis or an opportunity to improve scientific production? Eur. J. Educ. 55 , 618–631 (2020).

Stanley, T. D., Carter, E. C. & Doucouliagos, H. What meta-analyses reveal about the replicability of psychological research. Psychol. Bull. 144 , 1325–1346 (2018).

Bryan, C. J., Tipton, E. & Yeager, D. S. Behavioural science is unlikely to change the world without a heterogeneity revolution. Nat. Hum. Behav. 5 , 980–989 (2021).

Van Bavel, J. J., Mende-Siedlecki, P., Brady, W. J. & Reinero, D. A. Contextual sensitivity in scientific reproducibility. Proc. Natl Acad. Sci. USA 113 , 6454–6459 (2016).

Brenninkmeijer, J., Derksen, M., Rietzschel, E., Vazire, S. & Nuijten, M. Informal laboratory practices in psychology. Collabra Psychol . 5, 45 (2019).

Landy, J. F. et al. Crowdsourcing hypothesis tests: making transparent how design choices shape research results. Psychol. Bull. 146 , 451 (2020).

McShane, B. B., Tackett, J. L., Böckenholt, U. & Gelman, A. Large-scale replication projects in contemporary psychological research. Am. Stat. 73 , 99–105 (2019).

Sanbonmatsu, D. M., Cooley, E. H. & Butner, J. E. The impact of complexity on methods and findings in psychological science. Front. Psychol. 11 , 580111 (2021).

Cartwright, N. & Hardie, J. Evidence-Based Policy: A Practical Guide to Doing It Better (Oxford Univ. Press, 2012).

Yeager, D. To change the world, behavioral intervention research will need to get serious about heterogeneity. OSF https://osf.io/zuh93/ (2020).

Snow, T. Mind the gap between the truth and data. Nesta https://www.nesta.org.uk/blog/mind-gap-between-truth-and-data/ (9 October 2019).

Damschroder, L. J. et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement. Sci. 4 , 50 (2009).

Oberauer, K. & Lewandowsky, S. Addressing the theory crisis in psychology. Psychon. Bull. Rev. 26 , 1596–1618 (2019).

Borsboom, D., van der Maas, H. L., Dalege, J., Kievit, R. A. & Haig, B. D. Theory construction methodology: a practical framework for building theories in psychology. Perspect. Psychol. Sci. 16 , 756–766 (2021).

Sanbonmatsu, D. M. & Johnston, W. A. Redefining science: the impact of complexity on theory development in social and behavioral research. Perspect. Psychol. Sci. 14 , 672–690 (2019).

Fried, E. I. Theories and models: what they are, what they are for, and what they are about. Psychol. Inq. 31 , 336–344 (2020).

Muthukrishna, M. & Henrich, J. A problem in theory. Nat. Hum. Behav. 3 , 221–229 (2019).

Schimmelpfennig, R. & Muthukrishna, M. Cultural evolutionary behavioural science in public policy. Behav. Public Policy https://doi.org/10.1017/bpp.2022.40 (2023)

Kwan, V. S., John, O. P., Kenny, D. A., Bond, M. H. & Robins, R. W. Reconceptualizing individual differences in self-enhancement bias: an interpersonal approach. Psychol. Rev. 111 , 94 (2004).

Mezulis, A. H., Abramson, L. Y., Hyde, J. S. & Hankin, B. L. Is there a universal positivity bias in attributions? A meta-analytic review of individual, developmental, and cultural differences in the self-serving attributional bias. Psychol. Bull. 130 , 711 (2004).

Smets, K. There is more to behavioral economics than biases and fallacies. Behavioral Scientist http://behaviouralscientist.org/there-is-more-to-behavioural-science-than-biases-and-fallacies/ (24 July 2018).

Rand, D. G. Cooperation, fast and slow: meta-analytic evidence for a theory of social heuristics and self-interested deliberation. Psychol. Sci. 27 , 1192–1206 (2016).

Gelfand, M. J. Rule Makers, Rule Breakers: How Tight and Loose Cultures Wire Our World (Constable & Robinson, 2018).

West, R. et al. Development of a formal system for representing behaviour-change theories. Nat. Hum. Behav. 3 , 526–536 (2019).

Hale, J. et al. An ontology-based modelling system (OBMS) for representing behaviour change theories applied to 76 theories. Wellcome Open Res 5 , 177 (2020).

van Rooij, I. & Baggio, G. Theory before the test: how to build high-verisimilitude explanatory theories in psychological science. Perspect. Psychol. Sci. 16 , 682–697 (2021).

Smaldino, P. E. How to build a strong theoretical foundation. Psychol. Inq. 31 , 297–301 (2020).

Abner, G. B., Kim, S. Y. & Perry, J. L. Building evidence for public human resource management: using middle range theory to link theory and data. Rev. Public Pers. Adm. 37 , 139–159 (2017).

Moore, L. F., Johns, G. & Pinder, C. C. in Middle Range Theory and the Study of Organizations (eds Pinder, C. C. & Moore, L. F.) 1–16 (Martinus Nijhoff, 1980).

Berkman, E. T. & Wilson, S. M. So useful as a good theory? The practicality crisis in (social) psychological theory. Perspect. Psychol. Sci. 16 , 864–874 (2021).

Lieder, F. & Griffiths, T. L. Resource-rational analysis: understanding human cognition as the optimal use of limited computational resources. Behav. Brain Sci. 43 , e1 (2020).

Callaway, F., Hardy, M. & Griffiths, T. Optimal nudging for cognitively bounded agents: a framework for modeling, predicting, and controlling the effects of choice architectures. Preprint at https://doi.org/10.31234/osf.io/7ahdc (2022).

Roese, N. J. & Vohs, K. D. Hindsight bias. Perspect. Psychol. Sci. 7 , 411–426 (2012).

Henriksen, K. & Kaplan, H. Hindsight bias, outcome knowledge and adaptive learning. Qual. Saf. Health Care 12 , ii46–ii50 (2003).

Bukszar, E. & Connolly, T. Hindsight bias and strategic choice: some problems in learning from experience. Acad. Manage. J. 31 , 628–641 (1988).

DellaVigna, S., Pope, D. & Vivalt, E. Predict science to improve science. Science 366 , 428–429 (2019).

Munnich, E. & Ranney, M. A. Learning from surprise: harnessing a metacognitive surprise signal to build and adapt belief networks. Top. Cogn. Sci. 11 , 164–177 (2019).

PubMed   Google Scholar  

Deshpande, M. & Dizon-Ross, R. The (Lack of) Anticipatory Effects of the Social Safety Net on Human Capital Investment Working Paper, https://faculty.chicagobooth.edu/-/media/faculty/rebecca-dizon-ross/research/ssi_rct.pdf (Chicago Booth, 2022).

DellaVigna, S. & Linos, E. RCTs to scale: comprehensive evidence from two nudge units. Econometrica 90 , 81–116 (2022).

Dimant, E., Clemente, E. G., Pieper, D., Dreber, A. & Gelfand, M. Politicizing mask-wearing: predicting the success of behavioral interventions among Republicans and Democrats in the U.S. Sci. Rep. 12 , 7575 (2022).

Ackerman, R., Bernstein, D. M. & Kumar, R. Metacognitive hindsight bias. Mem. Cogn. 48 , 731–744 (2020).

Pezzo, M. Surprise, defence, or making sense: what removes hindsight bias? Memory 11 , 421–441 (2003).

Dorison, C. A. & Heller, B. H. Observers penalize decision makers whose risk preferences are unaffected by loss–gain framing. J. Exp. Psychol. 151 , 2043–2059 (2022).

Porter, T. et al. Predictors and consequences of intellectual humility. Nat. Rev. Psychol. 1 , 524–536 (2022).

Egan, M., Hallsworth, M., McCrae, J. & Rutter, J. Behavioural Government: Using Behavioural Science to Improve How Governments Make Decisions (Behavioural Insights Team, 2018).

Walton, G. M. & Wilson, T. D. Wise interventions: psychological remedies for social and personal problems. Psychol. Rev. 125 , 617 (2018).

Lewis, N. A. Jr What counts as good science? How the battle for methodological legitimacy affects public psychology. Am. Psychol. 76 , 1323 (2021).

Lamont, M., Adler, L., Park, B. Y. & Xiang, X. Bridging cultural sociology and cognitive psychology in three contemporary research programmes. Nat. Hum. Behav. 1 , 866–872 (2017).

Vaisey, S. Motivation and justification: a dual-process model of culture in action. Am. J. Sociol. 114 , 1675–1715 (2009).

Swidler, A. Culture in action: symbols and strategies. Am. Sociol. Rev. 51 , 273–286 (1986).

Richardson, L. & John, P. Co-designing behavioural public policy: lessons from the field about how to ‘nudge plus’. Evid. Policy 17 , 405–422 (2021).

Banerjee, S. & John, P. Nudge plus: incorporating reflection into behavioral public policy. Behav. Public Policy , https://doi.org/10.1017/bpp.2021.6 (2021).

Reijula, S. & Hertwig, R. Self-nudging and the citizen choice architect. Behav. Public Policy 6 , 119–149 (2022).

Hertwig, R. & Grüne-Yanoff, T. Nudging and boosting: steering or empowering good decisions. Perspect. Psychol. Sci. 12 , 973–986 (2017).

Hertwig, R. When to consider boosting: some rules for policy-makers. Behav. Public Policy 1 , 143–161 (2017).

Grüne-Yanoff, T., Marchionni, C. & Feufel, M. A. Toward a framework for selecting behavioural policies: how to choose between boosts and nudges. Econ. Phil. 34 , 243–266 (2018).

Sims, A. & Müller, T. M. Nudge versus boost: a distinction without a normative difference. Econ. Phil. 35 , 195–222 (2019).

Sunstein, C. R. Choosing Not to Choose: Understanding the Value of Choice (Oxford Univ. Press, 2015).

Miller, G. A. Psychology as a means of promoting human welfare. Am. Psychol. 24 , 1063 (1969).

Bason, C. Leading Public Sector Innovation: Co-creating for a Better Society (Policy Press, 2018).

Big-data studies of human behaviour need a common language. Nature 595 , 149–150 (2021).

Yarkoni, T. & Westfall, J. Choosing prediction over explanation in psychology: lessons from machine learning. Perspect. Psychol. Sci. 12 , 1100–1122 (2017).

Künzel, S. R., Sekhon, J. S., Bickel, P. J. & Yu, B. Metalearners for estimating heterogeneous treatment effects using machine learning. Proc. Natl Acad. Sci. USA 116 , 4156–4165 (2019).

Wager, S. & Athey, S. Estimation and inference of heterogeneous treatment effects using random forests. J. Am. Stat. Assoc. 113 , 1228–1242 (2018).

Todd-Blick, A. et al. Winners are not keepers: characterizing household engagement, gains, and energy patterns in demand response using machine learning in the United States. Energy Res. Soc. Sci. 70 , 101595 (2020).

Mills, S. Personalized nudging. Behav. Public Policy 6 , 150–159 (2022).

Soman, D. & Hossain, T. Successfully scaled solutions need not be homogenous. Behav. Public Policy 5 , 80–89 (2021).

Möhlmann, M. Algorithmic nudges don’t have to be unethical. Harvard Business Review, 22 April (2021).

Susser, D., Roessler, B., & Nissenbaum, H. Online manipulation: hidden influences in a digital world. 4 Georget. Law Technol. Rev. 1 (2019).

Abbasi, M., Fridler, A., Schneidegger, C. & Venkatasubramanian, S. Fairness in representation: quantifying stereotyping as a representational harm. In SIAM International Conference on Data Mining , SDM 2019, (eds. Berger-Wolf, T. & Chawla, N.) 801–809 (Society for Industrial and Applied Mathematics, 2019).

Eubanks, V. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (St. Martin’s, 2018).

Obermeyer, Z., Powers, B., Vogeli, C. & Mullainathan, S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 366 , 447–453 (2019).

Kozyreva, A., Lorenz-Spreen, P., Hertwig, R., Lewandowsky, S. & Herzog, S. M. Public attitudes towards algorithmic personalization and use of personal data online: evidence from Germany, Great Britain, and the United States. Hum. Soc. Sci. Commun. 8 , 117 (2021).

Kotamarthi, P. This is personal: the do’s and don’ts of personalization in tech. Decision Lab https://thedecisionlab.com/insights/technology/this-is-personal-the-dos-and-donts-of-personalization-in-tech (2022).

Matz, S. C., Kosinski, M., Nave, G. & Stillwell, D. J. Psychological targeting as an effective approach to digital mass persuasion. Proc. Natl Acad. Sci. USA 114 , 12714–12719 (2017).

Pierson, E., Cutler, D. M., Leskovec, J., Mullainathan, S. & Obermeyer, Z. An algorithmic approach to reducing unexplained pain disparities in underserved populations. Nat. Med. 27 , 136–140 (2021).

Lorenz-Spreen, P. et al. Boosting people’s ability to detect microtargeted advertising. Sci. Rep. 11 , 15541 (2021).

Nagel, T. The View from Nowhere (Oxford Univ. Press, 1986).

Sugden, R. The behavioural economist and the social planner: to whom should behavioural welfare economics be addressed? Inquiry 56 , 519–538 (2013).

Liscow, Z. D. & Markovits, D. Democratizing behavioural economics. Yale J. Regul. 39 , 1217–1290 (2022).

Bergman, P., Lasky-Fink, J. & Rogers, T. Simplification and defaults affect adoption and impact of technology, but decision makers do not realize it. Organ. Behav. Hum. Decis. Process. 158 , 66–79 (2020).

Pereira, M. M. Understanding and reducing biases in elite beliefs about the electorate. Am. Polit. Sci. Rev. 115 , 1308–1324 (2021).

Roberts, S. O., Bareket-Shavit, C., Dollins, F. A., Goldie, P. D. & Mortenson, E. Racial inequality in psychological research: trends of the past and recommendations for the future. Perspect. Psychol. Sci. 15 , 1295–1309 (2020).

Lepenies, R. & Małecka, M. in Handbook of Behavioural Change and Public Policy (eds Beck, S. & Straßheim, H.) 344–360 (Edward Elgar, 2019).

Common Thread. From Idea to Immunization:A Blueprint to Building a BI Unit in the Global South https://gocommonthread.com/work/global-gavi/bi (2022).

Blasi, D. E., Henrich, J., Adamou, E., Kemmerer, D. & Majid, A. Over-reliance on English hinders cognitive science. Trends Cogn. Sci. 26 , 1153–1170 (2022).

Henrich, J., Heine, S. J. & Norenzayan, A. The weirdest people in the world? Behav. Brain Sci. 33 , 61–83 (2010).

Cheon, B. K., Melani, I. & Hong, Y. Y. How USA-centric is psychology? An archival study of implicit assumptions of generalizability of findings to human nature based on origins of study samples. Soc. Psychol. Pers. Sci. 11 , 928–937 (2020).

Dupree, C. H. & Kraus, M. W. Psychological science is not race neutral. Perspect. Psychol. Sci. 17 , 270–275 (2022).

Mullainathan, S. Keynote address to the Society of Judgment and Decision Making Annual Conference (2022).

A Guidebook for Community Organizations, Researchers, and Funders to Help Us Get from Insufficient Understanding to More Authentic Truth https://chicagobeyond.org/researchequity/ (Chicago Beyond, 2018).

Asman, S., Casarotto, C., Duflo, A. & Rajkotia, R. Locally-grounded research: strengthening partnerships to advance the science and impact of development research. Innovations for Poverty Action https://www.poverty-action.org/blog/locally-grounded-research-strengthening-partnerships-advance-science-and-impact-development (28 September 2021).

The PhD Project, https://phdproject.org/ (PhD Project, accessed 9 December 2022).

Erosheva, E. A. et al. NIH peer review: criterion scores completely account for racial disparities in overall impact scores. Sci. Adv. 6 , eaaz4868 (2020).

Marteau, T. M. et al. Judging nudging: can nudging improve population health? BMJ 342 , d228 (2011).

Lambe, F. et al. Embracing complexity: a transdisciplinary conceptual framework for understanding behavior change in the context of development-focused interventions. World Dev. 126 , 104703 (2020).

Shrout, P. E. & Rodgers, J. L. Psychology, science, and knowledge construction: broadening perspectives from the replication crisis. Annu. Rev. Psychol. 69 , 487–510 (2018).

IJzerman, H. et al. Use caution when applying behavioural science to policy. Nat. Hum. Behav. 4 , 1092–1094 (2020).

Grüne-Yanoff, T. Old wine in new casks: libertarian paternalism still violates liberal principles. Soc. Choice Welf. 38 , 635–645 (2012).

Rizzo, M. J. & Whitman, G. Escaping Paternalism: Rationality, Behavioral Economics, and Public Policy (Cambridge Univ. Press, 2020).

Ewert, B. Moving beyond the obsession with nudging individual behaviour: towards a broader understanding of behavioural public policy. Public Policy Adm. 35 , 337–360 (2020).

Leggett, W. The politics of behaviour change: nudge, neoliberalism and the state. Policy Polit. 42 , 3–19 (2014).

Download references


Acknowledgements

I thank L. Tublin for her editorial support. I also thank S. Banerjee, E. Berkman, A. Buttenheim, F. Callaway, J. Collins, J. Doctor, A. Gyani, D. Halpern, P. John, T. Marteau, M. Muthukrishna, D. Perera, D. Perrott, K. Ruggeri, R. Schmidt, D. Soman, H. Strassheim, C. Sunstein and members of the Behavioural Insights Team for their feedback on previous drafts.

Author information

Authors and affiliations.

Behavioural Insights Team, Brooklyn, NY, USA

Michael Hallsworth


Corresponding author

Correspondence to Michael Hallsworth.

Ethics declarations

Competing interests.

The author is the managing director, Americas, at the Behavioural Insights Team, which provides consultancy services in behavioural science.

Peer review

Peer review information.

Nature Human Behaviour thanks Peter John and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.


About this article

Cite this article.

Hallsworth, M. A manifesto for applying behavioural science. Nat Hum Behav 7 , 310–322 (2023). https://doi.org/10.1038/s41562-023-01555-3


Received : 26 September 2022

Accepted : 10 February 2023

Published : 20 March 2023

Issue Date : March 2023

DOI : https://doi.org/10.1038/s41562-023-01555-3




Organizing Your Social Sciences Research Assignments


A case study research paper examines a person, place, event, condition, phenomenon, or other type of subject of analysis in order to extrapolate key themes and results that help predict future trends, illuminate previously hidden issues that can be applied to practice, and/or provide a means for understanding an important research problem with greater clarity. A case study research paper usually examines a single subject of analysis, but case study papers can also be designed as a comparative investigation that shows relationships between two or more subjects. The methods used to study a case can rest within a quantitative, qualitative, or mixed-method investigative paradigm.

Case Studies. Writing@CSU. Colorado State University; Mills, Albert J., Gabrielle Durepos, and Eiden Wiebe, editors. Encyclopedia of Case Study Research. Thousand Oaks, CA: SAGE Publications, 2010; “What is a Case Study?” In Swanborn, Peter G. Case Study Research: What, Why and How? London: SAGE, 2010.

How to Approach Writing a Case Study Research Paper

General information about how to choose a topic to investigate can be found under the " Choosing a Research Problem " tab in the Organizing Your Social Sciences Research Paper writing guide. Review this page because it may help you identify a subject of analysis that can be investigated using a case study design.

However, identifying a case to investigate involves more than choosing the research problem . A case study encompasses a problem contextualized around the application of in-depth analysis, interpretation, and discussion, often resulting in specific recommendations for action or for improving existing conditions. As Seawright and Gerring note, practical considerations such as time and access to information can influence case selection, but these issues should not be the sole factors used in describing the methodological justification for identifying a particular case to study. Given this, selecting a case includes considering the following:

  • Does the case represent an unusual or atypical example of a research problem that requires more in-depth analysis? Cases often represent a topic that rests on the fringes of prior investigations because the case may provide new ways of understanding the research problem. For example, if the research problem is to identify strategies to improve policies that support girls' access to secondary education in predominantly Muslim nations, you could consider using Azerbaijan as a case study rather than selecting a more obvious nation in the Middle East. Doing so may reveal important new insights into recommending how governments in other predominantly Muslim nations can formulate policies that support improved access to education for girls.
  • Does the case provide important insight or illuminate a previously hidden problem? In-depth analysis of a case can be based on the hypothesis that the case study will reveal trends or issues that have not been exposed in prior research or will reveal new and important implications for practice. For example, anecdotal evidence may suggest drug use among homeless veterans is related to their patterns of travel throughout the day. Assuming prior studies have not looked at individual travel choices as a way to study access to illicit drug use, a case study that observes a homeless veteran could reveal how issues of personal mobility choices facilitate regular access to illicit drugs. Note that it is important to conduct a thorough literature review to ensure that your assumption about the need to reveal new insights or previously hidden problems is valid and evidence-based.
  • Does the case challenge and offer a counter-point to prevailing assumptions? Over time, research on any given topic can fall into a trap of developing assumptions based on outdated studies that are still applied to new or changing conditions, or the idea that something should simply be accepted as "common sense," even though the issue has not been thoroughly tested in current practice. A case study analysis may offer an opportunity to gather evidence that challenges prevailing assumptions about a research problem and provide a new set of recommendations applied to practice that have not been tested previously. For example, perhaps there has been a long practice among scholars to apply a particular theory in explaining the relationship between two subjects of analysis. Your case could challenge this assumption by applying an innovative theoretical framework [perhaps borrowed from another discipline] to explore whether this approach offers new ways of understanding the research problem. Taking a contrarian stance is one of the most important ways that new knowledge and understanding develops from existing literature.
  • Does the case provide an opportunity to pursue action leading to the resolution of a problem? Another way to think about choosing a case to study is to consider how the results from investigating a particular case may reveal ways in which to resolve an existing or emerging problem. For example, studying the case of an unforeseen incident, such as a fatal accident at a railroad crossing, can reveal hidden issues that could be applied to preventative measures that contribute to reducing the chance of accidents in the future. In this example, a case study investigating the accident could lead to a better understanding of where to strategically locate additional signals at other railroad crossings so as to better warn drivers of an approaching train, particularly when visibility is hindered by heavy rain, fog, or at night.
  • Does the case offer a new direction for future research? A case study can be used as a tool for an exploratory investigation that highlights the need for further research about the problem. A case can be used when there are few studies that help predict an outcome or that establish a clear understanding about how best to proceed in addressing a problem. For example, after conducting a thorough literature review [very important!], you discover that little research exists showing the ways in which women contribute to promoting water conservation in rural communities of east central Africa. A case study of how women contribute to saving water in a rural village of Uganda can lay the foundation for understanding the need for more thorough research that documents how women in their roles as cooks and family caregivers think about water as a valuable resource within their community. This example of a case study could also point to the need for scholars to build new theoretical frameworks around the topic [e.g., applying feminist theories of work and family to the issue of water conservation].

Eisenhardt, Kathleen M. “Building Theories from Case Study Research.” Academy of Management Review 14 (October 1989): 532-550; Emmel, Nick. Sampling and Choosing Cases in Qualitative Research: A Realist Approach. Thousand Oaks, CA: SAGE Publications, 2013; Gerring, John. “What Is a Case Study and What Is It Good for?” American Political Science Review 98 (May 2004): 341-354; Mills, Albert J., Gabrielle Durepos, and Elden Wiebe, editors. Encyclopedia of Case Study Research. Thousand Oaks, CA: SAGE Publications, 2010; Seawright, Jason and John Gerring. "Case Selection Techniques in Case Study Research." Political Research Quarterly 61 (June 2008): 294-308.

Structure and Writing Style

The purpose of a paper in the social sciences designed around a case study is to thoroughly investigate a subject of analysis in order to reveal a new understanding about the research problem and, in so doing, contribute new knowledge to what is already known from previous studies. In applied social sciences disciplines [e.g., education, social work, public administration, etc.], case studies may also be used to reveal best practices, highlight key programs, or investigate interesting aspects of professional work.

In general, the structure of a case study research paper is not all that different from a standard college-level research paper. However, there are subtle differences you should be aware of. Here are the key elements to organizing and writing a case study research paper.

I.  Introduction

As with any research paper, your introduction should serve as a roadmap for your readers to ascertain the scope and purpose of your study . The introduction to a case study research paper, however, should not only describe the research problem and its significance, but you should also succinctly describe why the case is being used and how it relates to addressing the problem. The two elements should be linked. With this in mind, a good introduction answers these four questions:

  • What is being studied? Describe the research problem and describe the subject of analysis [the case] you have chosen to address the problem. Explain how they are linked and what elements of the case will help to expand knowledge and understanding about the problem.
  • Why is this topic important to investigate? Describe the significance of the research problem and state why a case study design and the subject of analysis that the paper is designed around are appropriate in addressing the problem.
  • What did we know about this topic before I did this study? Provide background that helps lead the reader into the more in-depth literature review to follow. If applicable, summarize prior case study research applied to the research problem and why it fails to adequately address the problem. Describe why your case will be useful. If no prior case studies have been used to address the research problem, explain why you have selected this subject of analysis.
  • How will this study advance new knowledge or new ways of understanding? Explain why your case study will be suitable in helping to expand knowledge and understanding about the research problem.

Each of these questions should be addressed in no more than a few paragraphs. An exception can be made when you are addressing a complex research problem or subject of analysis that requires more in-depth background information.

II.  Literature Review

The literature review for a case study research paper is generally structured the same as it is for any college-level research paper. The difference, however, is that the literature review is focused on providing background information and  enabling historical interpretation of the subject of analysis in relation to the research problem the case is intended to address . This includes synthesizing studies that help to:

  • Place relevant works in the context of their contribution to understanding the case study being investigated . This would involve summarizing studies that have used a similar subject of analysis to investigate the research problem. If there is literature using the same or a very similar case to study, you need to explain why duplicating past research is important [e.g., conditions have changed; prior studies were conducted long ago, etc.].
  • Describe the relationship each work has to the others under consideration that informs the reader why this case is applicable . Your literature review should include a description of any works that support using the case to investigate the research problem and the underlying research questions.
  • Identify new ways to interpret prior research using the case study . If applicable, review any research that has examined the research problem using a different research design. Explain how your use of a case study design may reveal new knowledge, offer a new perspective, or redirect research in an important new direction.
  • Resolve conflicts amongst seemingly contradictory previous studies . This refers to synthesizing any literature that points to unresolved issues of concern about the research problem and describing how the subject of analysis that forms the case study can help resolve these existing contradictions.
  • Point the way in fulfilling a need for additional research . Your review should examine any literature that lays a foundation for understanding why your case study design and the subject of analysis around which you have designed your study may reveal a new way of approaching the research problem or offer a perspective that points to the need for additional research.
  • Expose any gaps that exist in the literature that the case study could help to fill . Summarize any literature that not only shows how your subject of analysis contributes to understanding the research problem, but how your case contributes to a new way of understanding the problem that prior research has failed to do.
  • Locate your own research within the context of existing literature [very important!] . Collectively, your literature review should always place your case study within the larger domain of prior research about the problem. The overarching purpose of reviewing pertinent literature in a case study paper is to demonstrate that you have thoroughly identified and synthesized prior studies in relation to explaining the relevance of the case in addressing the research problem.

III.  Method

In this section, you explain why you selected a particular case [i.e., subject of analysis] and the strategy you used to identify and ultimately decide that your case was appropriate in addressing the research problem. The way you describe the methods used varies depending on the type of subject of analysis that constitutes your case study.

If your subject of analysis is an incident or event. In the social and behavioral sciences, the event or incident that represents the case to be studied is usually bounded by time and place, with a clear beginning and end and with an identifiable location or position relative to its surroundings. The subject of analysis can be a rare or critical event or it can focus on a typical or regular event. The purpose of studying a rare event is to illuminate new ways of thinking about the broader research problem or to test a hypothesis. Critical incident case studies must describe the method by which you identified the event and explain the process by which you determined the validity of this case to inform broader perspectives about the research problem or to reveal new findings. However, the event does not have to be rare or uniquely significant to support new thinking about the research problem or to challenge an existing hypothesis. For example, Walo, Bull, and Breen conducted a case study to identify and evaluate the direct and indirect economic benefits and costs of a local sports event in the City of Lismore, New South Wales, Australia. The purpose of their study was to provide new insights from measuring the impact of a typical local sports event that prior studies could not measure well because they focused on large "mega-events." Whether the event is rare or not, the methods section should include an explanation of the following characteristics of the event: a) when did it take place; b) what were the underlying circumstances leading to the event; and, c) what were the consequences of the event in relation to the research problem.

If your subject of analysis is a person. Explain why you selected this particular individual to be studied and describe what experiences they have had that provide an opportunity to advance new understandings about the research problem. Mention any background about this person which might help the reader understand the significance of their experiences that make them worthy of study. This includes describing the relationships this person has had with other people, institutions, and/or events that support using them as the subject for a case study research paper. It is particularly important to differentiate the person as the subject of analysis from others and to succinctly explain how the person relates to examining the research problem [e.g., why is one politician in a particular local election used to show an increase in voter turnout rather than any other candidate running in the election]. Note that these issues apply to a specific group of people used as a case study unit of analysis [e.g., a classroom of students].

If your subject of analysis is a place. In general, a case study that investigates a place suggests a subject of analysis that is unique or special in some way and that this uniqueness can be used to build new understanding or knowledge about the research problem. A case study of a place must not only describe its various attributes relevant to the research problem [e.g., physical, social, historical, cultural, economic, political], but you must state the method by which you determined that this place will illuminate new understandings about the research problem. It is also important to articulate why a particular place as the case for study is being used if similar places also exist [i.e., if you are studying patterns of homeless encampments of veterans in open spaces, explain why you are studying Echo Park in Los Angeles rather than Griffith Park]. If applicable, describe what type of human activity involving this place makes it a good choice to study [e.g., prior research suggests Echo Park has more homeless veterans].

If your subject of analysis is a phenomenon. A phenomenon refers to a fact, occurrence, or circumstance that can be studied or observed but whose cause or explanation is in question. In this sense, a phenomenon that forms your subject of analysis can encompass anything that can be observed or presumed to exist but is not fully understood. In the social and behavioral sciences, the case usually focuses on human interaction within a complex physical, social, economic, cultural, or political system. For example, the phenomenon could be the observation that many vehicles used by ISIS fighters are small trucks with English language advertisements on them. The research problem could be that ISIS fighters are difficult to combat because they are highly mobile. The research questions could be how and by what means are these vehicles used by ISIS being supplied to the militants and how might supply lines to these vehicles be cut off? How might knowing the suppliers of these trucks reveal larger networks of collaborators and financial support? A case study of a phenomenon most often encompasses an in-depth analysis of a cause and effect that is grounded in an interactive relationship between people and their environment in some way.

NOTE:   The choice of the case or set of cases to study cannot appear random. Evidence that supports the method by which you identified and chose your subject of analysis should clearly support investigation of the research problem and be linked to key findings from your literature review. Be sure to cite any studies that helped you determine that the case you chose was appropriate for examining the problem.

IV.  Discussion

The main elements of your discussion section are generally the same as any research paper, but centered around interpreting and drawing conclusions about the key findings from your analysis of the case study. Note that a general social sciences research paper may contain a separate section to report findings. However, in a paper designed around a case study, it is common to combine a description of the results with the discussion about their implications. The objectives of your discussion section should include the following:

Reiterate the Research Problem/State the Major Findings Briefly reiterate the research problem you are investigating and explain why the subject of analysis around which you designed the case study was used. You should then describe the findings revealed from your study of the case in direct, declarative, and succinct statements of the study results. Highlight any findings that were unexpected or especially profound.

Explain the Meaning of the Findings and Why They are Important Systematically explain the meaning of your case study findings and why you believe they are important. Begin this part of the section by repeating what you consider to be your most important or surprising finding first, then systematically review each finding. Be sure to thoroughly extrapolate what your analysis of the case can tell the reader about situations or conditions beyond the actual case that was studied while, at the same time, being careful not to misconstrue or conflate a finding that undermines the external validity of your conclusions.

Relate the Findings to Similar Studies No study in the social sciences is so novel or possesses such a restricted focus that it has absolutely no relation to previously published research. The discussion section should relate your case study results to those found in other studies, particularly if questions raised from prior studies served as the motivation for choosing your subject of analysis. This is important because comparing and contrasting the findings of other studies helps support the overall importance of your results and it highlights how and in what ways your case study design and the subject of analysis differs from prior research about the topic.

Consider Alternative Explanations of the Findings Remember that the purpose of social science research is to discover and not to prove. When writing the discussion section, you should carefully consider all possible explanations revealed by the case study results, rather than just those that fit your hypothesis or prior assumptions and biases. Be alert to what the in-depth analysis of the case may reveal about the research problem, including offering a contrarian perspective to what scholars have stated in prior research if that is how the findings can be interpreted from your case.

Acknowledge the Study's Limitations You can state the study's limitations in the conclusion section of your paper, but describing the limitations of your subject of analysis in the discussion section provides an opportunity to identify them and explain why they are not significant. This part of the discussion section should also note any unanswered questions or issues your case study could not address.

Suggest Areas for Further Research Although your case study may offer important insights about the research problem, there are likely additional questions related to the problem that remain unanswered or findings that unexpectedly revealed themselves as a result of your in-depth analysis of the case. Be sure that the recommendations for further research are linked to the research problem and that you explain why your recommendations are valid in other contexts and based on the original assumptions of your study.

V.  Conclusion

As with any research paper, you should summarize your conclusion in clear, simple language; emphasize how the findings from your case study differ from or support prior research and why. Do not simply reiterate the discussion section. Provide a synthesis of key findings presented in the paper to show how these converge to address the research problem. If you haven't already done so in the discussion section, be sure to document the limitations of your case study and any need for further research.

The function of your paper's conclusion is to: 1) reiterate the main argument supported by the findings from your case study; 2) state clearly the context, background, and necessity of pursuing the research problem using a case study design in relation to an issue, controversy, or a gap found from reviewing the literature; and, 3) provide a place to persuasively and succinctly restate the significance of your research problem, given that the reader has now been presented with in-depth information about the topic.

Consider the following points to help ensure your conclusion is appropriate:

  • If the argument or purpose of your paper is complex, you may need to summarize these points for your reader.
  • If prior to your conclusion, you have not yet explained the significance of your findings or if you are proceeding inductively, use the conclusion of your paper to describe your main points and explain their significance.
  • Move from a detailed to a general level of consideration of the case study's findings that returns the topic to the context provided by the introduction or within a new context that emerges from your case study findings.

Note that, depending on the discipline you are writing in or the preferences of your professor, the concluding paragraph may contain your final reflections on the evidence presented as it applies to practice or on the essay's central research problem. However, the nature of being introspective about the subject of analysis you have investigated will depend on whether you are explicitly asked to express your observations in this way.

Problems to Avoid

Overgeneralization One of the goals of a case study is to lay a foundation for understanding broader trends and issues applied to similar circumstances. However, be careful when drawing conclusions from your case study. They must be evidence-based and grounded in the results of the study; otherwise, they are merely speculation. Looking at a prior example, it would be incorrect to state that girls' access to social media was a factor in improving girls' access to education in Azerbaijan, with policy implications for improving access in other Muslim nations, if there is no documentary evidence from your case study to indicate this. There may be anecdotal evidence that retention rates were better for girls who were engaged with social media, but this observation would only point to the need for further research and would not be a definitive finding if this was not a part of your original research agenda.

Failure to Document Limitations No case is going to reveal all that needs to be understood about a research problem. Therefore, just as you have to clearly state the limitations of a general research study, you must describe the specific limitations inherent in the subject of analysis. For example, the case of studying how women conceptualize the need for water conservation in a village in Uganda could have limited application in other cultural contexts or in areas where fresh water from rivers or lakes is plentiful and, therefore, conservation is understood more in terms of managing access rather than preserving access to a scarce resource.

Failure to Extrapolate All Possible Implications Just as you don't want to over-generalize from your case study findings, you also have to be thorough in the consideration of all possible outcomes or recommendations derived from your findings. If you do not, your reader may question the validity of your analysis, particularly if you failed to document an obvious outcome from your case study research. For example, suppose that, in the case of studying the accident at the railroad crossing to evaluate where and what types of warning signals should be located, you failed to take speed limit signage into consideration as well as warning signals. When designing your case study, be sure you have thoroughly addressed all aspects of the problem and do not leave gaps in your analysis that leave the reader questioning the results.

Case Studies. Writing@CSU. Colorado State University; Gerring, John. Case Study Research: Principles and Practices. New York: Cambridge University Press, 2007; Merriam, Sharan B. Qualitative Research and Case Study Applications in Education. Rev. ed. San Francisco, CA: Jossey-Bass, 1998; Miller, Lisa L. “The Use of Case Studies in Law and Social Science Research.” Annual Review of Law and Social Science 14 (2018): TBD; Mills, Albert J., Gabrielle Durepos, and Elden Wiebe, editors. Encyclopedia of Case Study Research. Thousand Oaks, CA: SAGE Publications, 2010; Putney, LeAnn Grogan. "Case Study." In Encyclopedia of Research Design, Neil J. Salkind, editor. (Thousand Oaks, CA: SAGE Publications, 2010), pp. 116-120; Simons, Helen. Case Study Research in Practice. London: SAGE Publications, 2009; Kratochwill, Thomas R. and Joel R. Levin, editors. Single-Case Research Design and Analysis: New Development for Psychology and Education. Hillsdale, NJ: Lawrence Erlbaum Associates, 1992; Swanborn, Peter G. Case Study Research: What, Why and How? London: SAGE, 2010; Yin, Robert K. Case Study Research: Design and Methods. 6th edition. Los Angeles, CA: SAGE Publications, 2014; Walo, Maree, Adrian Bull, and Helen Breen. “Achieving Economic Benefits at Local Events: A Case Study of a Local Sports Event.” Festival Management and Event Tourism 4 (1996): 95-106.

Writing Tip

At Least Five Misconceptions about Case Study Research

Social science case studies are often perceived as limited in their ability to create new knowledge because they are not randomly selected and findings cannot be generalized to larger populations. Flyvbjerg examines five misunderstandings about case study research and systematically "corrects" each one. To quote, these are:

  • Misunderstanding 1: General, theoretical [context-independent] knowledge is more valuable than concrete, practical [context-dependent] knowledge.
  • Misunderstanding 2: One cannot generalize on the basis of an individual case; therefore, the case study cannot contribute to scientific development.
  • Misunderstanding 3: The case study is most useful for generating hypotheses; that is, in the first stage of a total research process, whereas other methods are more suitable for hypotheses testing and theory building.
  • Misunderstanding 4: The case study contains a bias toward verification, that is, a tendency to confirm the researcher’s preconceived notions.
  • Misunderstanding 5: It is often difficult to summarize and develop general propositions and theories on the basis of specific case studies [p. 221].

While writing your paper, think introspectively about how you addressed these misconceptions, because doing so can help you strengthen the validity and reliability of your research by clarifying issues of case selection, the testing and challenging of existing assumptions, the interpretation of key findings, and the summation of case outcomes. Think of a case study research paper as a complete, in-depth narrative about the specific properties and key characteristics of your subject of analysis applied to the research problem.

Flyvbjerg, Bent. “Five Misunderstandings About Case-Study Research.” Qualitative Inquiry 12 (April 2006): 219-245.


Case Studies

Curious how behavioral science can improve product design? Dive into our case studies from tech, finance, health, and more.

Want to boost cryptocurrency adoption? Learn how our insights and ‘The Satoshi Experiment’ can help unlock behavioral science’s potential in product development.

How do we fight the loneliness epidemic? Our behavioral science experiment found an unexpected way to boost connection.

How can we enhance communication to increase participation in the RESEA program and improve job seeker engagement in finding meaningful employment?

How do you help people repay loans faster? Common Cents Lab, a Duke University initiative co-led by Kristen Berman, Wendy De La Rosa, and Mariel Beasley, worked with EarnUp to do this using behavioral insights.

How do you increase new patient engagement? We designed a suite of behavioral interventions for Belong Health that accomplished this. Read on to discover how we did it and what we learned.

How do you make physician reports easier to parse and more useful for providers? Learn how we worked with Belong Health to hone in on a critical source of data overload for physicians.

Could telling people when they’re spending more than others help them spend less? Common Cents Lab partnered with Arizona Financial Credit Union to learn how social proof can impact financial behavior.

How do you get more users to convert if you have a very long funnel? Common Cents Lab partnered with Kiva to find out—leading to $190K+ in additional credit for low- and moderate-income (LMI) small business owners.

How do you leverage tax refunds as an opportunity for long-term financial security? Common Cents Lab partnered with San Francisco fintech startup Digit to do just that.

How does a top business school transform student interaction and create a sense of belonging in a post-pandemic world?

The Common Cents Lab created an intervention that helps consumers lower the interest on their credit cards. They call it ‘Kill Bill.’

We designed interventions for the TytoCare device and user journey that drove a 120% increase in devices sold and a 65% increase in completed medical visits.

Neuromarketing — Predicting Consumer Behavior to Drive Purchasing Decisions 

Buying decisions can be driven by unconscious choices. Learn about how neuromarketing uncovers what drives decisions to increase conversions and revenue.

Valerie Kirk

What drives a person to not only buy something, but to choose one product or service over the other? The usual answers that come to a marketer’s mind when asked that question include need, price, availability, and brand familiarity.

But what if it goes deeper than that? What if consumer decision-making is driven by biology — specifically neural activity in the brain?

This idea is the basis of neuromarketing — sometimes known as consumer neuroscience — a field of study that incorporates biology and brain activity to predict and even influence consumer behavior and purchase decisions.

The Science Behind Neuromarketing

While the term neuromarketing was first introduced in the early 2000s, consumer neuroscience began to emerge in the 1990s, when measuring brain activity using functional magnetic resonance imaging (fMRI) machines became more accessible. 

Consumer neuroscience examines fMRI scans and electroencephalogram (EEG) measurements of people’s brain activity when they are given or shown stimuli, such as an advertisement, product packaging, or something to drink. Researchers may also use verbal prompts and monitor reactions. The brain activity seen on the scans shows what a person is feeling in that moment.

Consumer neuroscience also includes physiological tracking — measuring facial expressions, eye movements, pupil dilation, heart rate, or other physical reactions people experience when given the stimuli. With eye tracking software, marketers can use heat maps to see what consumers are most drawn to in ad campaigns or websites and the journey they take to ultimately purchase something or disengage with digital assets. 

Examples of neuromarketing research include: 

  • Serving Coca-Cola and Pepsi to subjects in an fMRI machine. When the drinks weren’t identified, the researchers noted a consistent neural response. But when subjects could see the brand, the part of their brains associated with emotions, memories, and unconscious processing showed enhanced activity, demonstrating that knowledge of the brand altered how the brain perceived the beverage. 
  • Scanning the brains of test subjects while they tasted three wines, each labeled with a different price. Their brains registered the wines differently, with neural signatures indicating a preference for the most expensive wine. In actuality, all three wines were the same. 

Why is Neuromarketing Important?

By understanding what people react to based on biology and not conscious choices, marketers can essentially predict consumer behavior. When marketers can predict behavior, they can take steps to market their products — from the price to packaging to product marketing campaigns — in ways that elicit emotional responses and compel consumers to buy, thus increasing sales and revenue. 

There is a truth to neuromarketing that can’t be replicated by traditional marketing research tactics like focus groups. People may not always tell the truth in focus groups, or they may say things they think others want to hear.

Neuromarketing techniques remove the human choice element in market research and expose a person’s real and unfiltered responses. This helps marketers gain a more complete understanding of consumer motivation and buying behavior, which drives marketing decisions and budget spending.

How is Neuromarketing Used in Business Today?

Businesses are turning to neuromarketing to guide critical marketing decisions. In many cases, neuromarketing techniques are replacing traditional marketing research tactics. 

Here are five ways businesses are using neuromarketing to improve their marketing efforts and drive sales. 

1. Testing Ads 

Marketers can get true, unbiased responses to ad campaigns by showing different ads to test subjects and scanning their brain activity or tracking their eye movement while they view the ads. Based on the scans and other physiological and emotional reactions, they can determine which campaign — or which campaign elements — resonate more with consumers.  

2. Improving Packaging Design

When test subjects are given early prototypes of product packaging, brain scans can help marketing and design teams see which version people are more likely to pick up and buy. Package design includes color, imagery, size, and shape.

3. Enhancing Website and App Design 

Neuromarketing can help guide website and app design. Brain scans can show which design elements are more likely to engage users and drive clicks and purchases. Facial coding can also show how people view websites and apps, which can inform where to put different pieces of content. 

4. Informing Rebranding

From start to finish, neuromarketing can guide decisions on rebranding. This includes whether a rebrand is needed, which visual elements and messages work better for the new brand, and how to use the new identity in marketing tools and other brand assets. 

5. Optimizing Conversion Rates 

It's estimated that 95 percent of decision-making happens unconsciously. Neuromarketing can help marketers understand what drives a person's unconscious choice to buy, or not buy, a product. Brands can then adapt their marketing materials and tactics to strengthen the elements that inspire people to buy.
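Whatever the scans suggest, the conversion lift from a redesign still has to be verified with data. As a minimal sketch (not a neuromarketing method in itself, and with all traffic and conversion numbers invented for illustration), a two-proportion z-test can check whether a redesigned page converts better than the original:

```python
from math import sqrt, erf

def conversion_z_test(conv_a, n_a, conv_b, n_b):
    """One-sided two-proportion z-test: does variant B convert better than A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # One-sided p-value from the standard normal CDF
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))
    return p_a, p_b, z, p_value

# Hypothetical numbers: 120/4000 conversions before, 168/4000 after a redesign
p_a, p_b, z, p_value = conversion_z_test(120, 4000, 168, 4000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p_value:.4f}")
```

A small p-value here would suggest the lift is unlikely to be noise; the significance threshold and the one-sided framing are choices the analyst makes up front.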

Examples of Neuromarketing in Action

  • Through neuromarketing techniques, Frito-Lay learned that matte bags with pictures of potatoes did not trigger a negative consumer response, whereas shiny bags with pictures did. Based on those insights, they changed their chip packaging design. 
  • The National Cancer Institute used fMRI scans to test three anti-smoking commercials that included a telephone hotline. The subjects were heavy smokers who indicated they wanted to quit. The National Cancer Institute ran all three ads, but the ad to which the test group reacted favorably corresponded to an increased hotline call volume when it ran.
  • IKEA has designed their stores in a way that showcases everything they sell before a consumer can actually leave the store, thus increasing the likelihood of a purchase. The layout was developed using neuromarketing research.
  • Neuromarketing research has shown that people react favorably to movement and speed. This knowledge guided FedEx to include a hidden arrow in its logo that represents quickness, which garners favorable reactions — and subconscious brand trust — among consumers.
  • People also react favorably to color. Through research on brain activity, businesses know that the color red signifies strength. It’s easy to see why red is the favored logo color of so many iconic brands, including Coca-Cola, Target, McDonald’s, and Netflix.  

The Ethics of Neuromarketing

In general, people like to think that they make purchasing decisions — and really any decision — consciously after considering all of the options and facts. Neuromarketing exposes the fact that people can be influenced on an unconscious level. This realization can lead not only to privacy concerns but also to people feeling like they are being manipulated by brands they trust, which could make them avoid those brands entirely. 

For example, in 2015, one of the main political parties in Mexico used neuromarketing to learn more about voters' interests and reactions to campaign ads. When the information leaked, Mexican citizens pushed back. The candidate apologized, but the revelation likely cost him votes.

Since the very first advertisement, businesses have been trying to persuade people to buy products. Neuromarketing uses the technology of the time to help marketers understand their customers better and deliver a more favorable experience. Today, brain scans and physiological measurements are carried out on test subjects who have almost certainly signed an informed consent document.

While neuromarketing may seem like a logical progression of the marketing and advertising discipline, companies that use its techniques should have robust ethical protocols and a crisis communication plan in place in case of public backlash.

How to Study Neuromarketing

People working across marketing disciplines could benefit from understanding what drives consumer behavior. Harvard Division of Continuing Education Professional & Executive Development offers a 2-day Consumer Behavior Course: Using Neuromarketing to Predict and Influence Customers.

The course covers a wide range of topics to help participants understand the psychology of consumer behavior and how to apply it. Participants will come away with a new set of tools for creating marketing campaigns that effectively resonate with the consumer base, capture market share, and ultimately drive profits and sales.

The program includes a discussion on corporate responsibility, marketing ethics, and specific guidelines for utilizing psychological techniques while safeguarding consumer and societal well-being.


About the Author

Valerie Kirk is a freelance writer and corporate storyteller specializing in customer and community outreach and topics and trends in education, technology, and healthcare. Based in Maryland near the Chesapeake Bay, she spends her free time exploring nature by bike, paddle board, or on long hikes with her family.

Harvard Division of Continuing Education

The Division of Continuing Education (DCE) at Harvard University is dedicated to bringing rigorous academics and innovative teaching capabilities to those seeking to improve their lives through education. We make Harvard education accessible to lifelong learners from high school to retirement.


Giving California School Districts the Tools to Transform Their Students’ Learning

The use of higher-quality instructional materials in schools has been shown to improve math achievement and close achievement gaps. However, the curriculum selection process can be time-intensive, and the immense number of materials on the market can make it difficult for decision-makers to determine which curriculum is best for their students.

Our team at The Decision Lab has previously tried to map out this choice overload environment for instructional materials in another research project. After that initiative was complete, we realized there was still more work to be done to ensure that school districts are able to make evidence-based decisions about curriculum purchasing. We set out to develop and test some real-world solutions to the problem.

On the ground in California 

California has more K-12 students than any other U.S. state, many of whom are Latinx, Black, and low-income students. California last updated its formal math materials adoption list in 2014, with the next scheduled update not due until 2024. In the interim, many school boards were hesitant to choose new math materials without up-to-date guidelines.

To help districts prepare for their upcoming adoption process, the California Curriculum Collaborative (CalCurriculum), composed of Pivot Learning and EdReports, recruited a cohort of 12 California school districts to test a number of interventions. The goal of the program was to provide training sessions and workshops that built district adoption teams' capacity to identify a high-quality curriculum in their next cycle.

Supporting evidence-based decision-making

TDL's role in supporting the cohort learning model was twofold: providing research support by way of evidence, and developing behavioral interventions for district adoption processes.

In order to understand the problems districts face in curriculum selection, we developed research-gathering instruments for the cohort, including interviews, focus groups, and survey protocols. These methods allowed us to give voice to educational actors on the ground — the people who were actually tasked with making math curriculum selections. We also developed and reviewed pre- and post-cohort surveys to more accurately understand how our research activities were shifting participants’ attitudes about curriculum selection. 

Drawing from the knowledge gathered on adoption processes, TDL developed resources to help district leaders overcome common barriers they experience in arriving at a decision. These were presented to districts during the cohort to aid them in their math adoption processes.

Transforming math material adoption processes

Our collaboration with CalCurriculum and the California Department of Education resulted in evidence-based recommendations for school districts in California and nationwide. If you're interested in reading more about our collaboration with CalCurriculum and the resulting recommendations, you can read the full report here.


