Sample Size Policy for Qualitative Studies Using In-Depth Interviews

  • Published: 12 September 2012
  • Volume 41, pages 1319–1320 (2012)

  • Shari L. Dworkin


In recent years, there has been an increase in submissions to the Journal that draw on qualitative research methods. This increase is welcome and indicates not only the interdisciplinarity embraced by the Journal (Zucker, 2002 ) but also its commitment to a wide array of methodologies.

For those who select qualitative methods, and grounded theory and in-depth interviews in particular, authors have recently raised many questions about how to write a rigorous Method section. That topic will be addressed in a subsequent Editorial. At present, however, the most common question we receive is: “How large does my sample size have to be?” I would therefore like to take this opportunity to answer it by discussing the relevant debates and then the policy of the Archives of Sexual Behavior.

The sample size used in qualitative research methods is often smaller than that used in quantitative research methods. This is because qualitative research methods are often concerned with garnering an in-depth understanding of a phenomenon or are focused on meaning (and heterogeneities in meaning), which are often centered on the how and why of a particular issue, process, situation, subculture, scene or set of social interactions. In-depth interview work is not as concerned with making generalizations to a larger population of interest and does not tend to rely on hypothesis testing but rather is more inductive and emergent in its process. As such, the aim of grounded theory and in-depth interviews is to create “categories from the data and then to analyze relationships between categories” while attending to how the “lived experience” of research participants can be understood (Charmaz, 1990, p. 1162).

There are several debates concerning what sample size is the right size for such endeavors. Most scholars argue that the concept of saturation is the most important factor to think about when mulling over sample size decisions in qualitative research (Mason, 2010). Saturation is defined by many as the point at which the data collection process no longer offers any new or relevant data. Another way to state this is that conceptual categories in a research project can be considered saturated “when gathering fresh data no longer sparks new theoretical insights, nor reveals new properties of your core theoretical categories” (Charmaz, 2006, p. 113). Saturation depends on many factors, and not all of them are under the researcher’s control. Some of these include: How homogeneous or heterogeneous is the population being studied? What are the selection criteria? How much money is in the budget to carry out the study? Are there key stratifiers (e.g., conceptual, demographic) that are critical for an in-depth understanding of the topic being examined? What is the timeline that the researcher faces? How experienced is the researcher in being able to determine when she or he has actually reached saturation (Charmaz, 2006)? Is the author carrying out theoretical sampling and is, therefore, concerned with ensuring depth on relevant concepts and examining a range of concepts and characteristics that are deemed critical for emergent findings (Glaser & Strauss, 1967; Strauss & Corbin, 1994, 2007)?

While some experts in qualitative research avoid the topic of “how many” interviews “are enough,” there is indeed variability in what is suggested as a minimum. An extremely large number of articles, book chapters, and books offer guidance and suggest anywhere from 5 to 50 participants as adequate. All of these pieces of work engage in nuanced debates when responding to the question of “how many” and frequently respond with a vague (and, actually, reasonable) “it depends.” Numerous factors are said to be important, including “the quality of data, the scope of the study, the nature of the topic, the amount of useful information obtained from each participant, the use of shadowed data, and the qualitative method and study design used” (Morse, 2000, p. 1). Others argue that the “how many” question can be the wrong question and that the rigor of the method “depends upon developing the range of relevant conceptual categories, saturating (filling, supporting, and providing repeated evidence for) those categories,” and fully explaining the data (Charmaz, 1990). Indeed, there have been countless conferences and conference sessions on these debates, as well as reports and myriad publications (for a compilation of debates, see Baker & Edwards, 2012).

Taking all of these perspectives into account, the Archives of Sexual Behavior is putting forward a policy for authors in order to provide more clarity on what is expected in terms of sample size for studies drawing on grounded theory and in-depth interviews. The policy of the Archives of Sexual Behavior is to adhere to the recommendation that 25–30 participants is the minimum sample size required to reach saturation and redundancy in grounded theory studies that use in-depth interviews. This number is considered adequate for publication because it (1) may allow for thorough examination of the characteristics that address the research questions and for distinguishing conceptual categories of interest, (2) maximizes the possibility that enough data have been collected to clarify relationships between conceptual categories and identify variation in processes, and (3) maximizes the chances that negative cases and hypothetical negative cases have been explored in the data (Charmaz, 2006; Morse, 1994, 1995).

The Journal does not want to paradoxically and rigidly quantify sample size when the endeavor at hand is qualitative in nature and the debates on this matter are complex. However, we are providing this practical guidance. We want to ensure that more of our submissions have an adequate sample size so as to get closer to reaching the goal of saturation and redundancy across relevant characteristics and concepts. The current recommendation that is being put forward does not include any comment on other qualitative methodologies, such as content and textual analysis, participant observation, focus groups, case studies, clinical cases or mixed quantitative–qualitative methods. The current recommendation also does not apply to phenomenological studies or life history approaches. The current guidance is intended to offer one clear and consistent standard for research projects that use grounded theory and draw on in-depth interviews.

Editor’s note: Dr. Dworkin is an Associate Editor of the Journal and is responsible for qualitative submissions.

Baker, S. E., & Edwards, R. (2012). How many qualitative interviews is enough? National Centre for Research Methods. Available at: http://eprints.ncrm.ac.uk/2273/.

Charmaz, K. (1990). ‘Discovering’ chronic illness: Using grounded theory. Social Science and Medicine, 30 , 1161–1172.

Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis . London: Sage Publications.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research . Chicago: Aldine Publishing Co.

Mason, M. (2010). Sample size and saturation in PhD studies using qualitative interviews. Forum: Qualitative Social Research, 11 (3) [Article No. 8].

Morse, J. M. (1994). Designing funded qualitative research. In N. Denzin & Y. Lincoln (Eds.), Handbook of qualitative research (pp. 220–235). Thousand Oaks, CA: Sage Publications.

Morse, J. M. (1995). The significance of saturation. Qualitative Health Research, 5 , 147–149.

Morse, J. M. (2000). Determining sample size. Qualitative Health Research, 10 , 3–5.

Strauss, A. L., & Corbin, J. M. (1994). Grounded theory methodology. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 273–285). Thousand Oaks, CA: Sage Publications.

Strauss, A. L., & Corbin, J. M. (2007). Basics of qualitative research: Techniques and procedures for developing grounded theory . Thousand Oaks, CA: Sage Publications.

Zucker, K. J. (2002). From the Editor’s desk: Receiving the torch in the era of sexology’s renaissance. Archives of Sexual Behavior, 31 , 1–6.

Author information

Authors and Affiliations

Department of Social and Behavioral Sciences, University of California at San Francisco, 3333 California St., LHTS #455, San Francisco, CA, 94118, USA

Shari L. Dworkin

Corresponding author

Correspondence to Shari L. Dworkin .

Dworkin, S.L. Sample Size Policy for Qualitative Studies Using In-Depth Interviews. Arch Sex Behav 41, 1319–1320 (2012). https://doi.org/10.1007/s10508-012-0016-6

Issue Date: December 2012


Qualitative Study

Affiliations

  • 1 University of Nebraska Medical Center
  • 2 GDB Research and Statistical Consulting
  • 3 GDB Research and Statistical Consulting/McLaren Macomb Hospital
  • PMID: 29262162
  • Bookshelf ID: NBK470395

Qualitative research is a type of research that explores and provides deeper insights into real-world problems. Instead of collecting numerical data points or intervening and introducing treatments as in quantitative research, qualitative research helps generate hypotheses for further investigation and a deeper understanding of quantitative data. Qualitative research gathers participants' experiences, perceptions, and behavior. It answers the hows and whys instead of how many or how much. It can be structured as a standalone study, purely relying on qualitative data, or as part of mixed-methods research that combines qualitative and quantitative data. This review introduces the reader to some basic concepts, definitions, terminology, and applications of qualitative research.

Qualitative research, at its core, asks open-ended questions whose answers are not easily put into numbers, such as "how" and "why." Due to the open-ended nature of the research questions, qualitative research design is often not linear like quantitative design. One of the strengths of qualitative research is its ability to explain processes and patterns of human behavior that can be difficult to quantify. Phenomena such as experiences, attitudes, and behaviors can be complex to capture accurately and quantitatively. In contrast, a qualitative approach allows participants themselves to explain how, why, or what they were thinking, feeling, and experiencing at a particular time or during an event of interest. Quantifying qualitative data certainly is possible, but at its core, qualitative analysis looks for themes and patterns that can be difficult to quantify, and it is essential to ensure that the context and narrative of qualitative work are not lost by trying to quantify something that is not meant to be quantified.

While qualitative research is sometimes placed in opposition to quantitative research, as if the two approaches, and the philosophical paradigms associated with each, necessarily "compete" against each other, qualitative and quantitative work are neither opposites nor mutually exclusive. For instance, qualitative research can help expand and deepen understanding of data or results obtained from quantitative analysis. Say a quantitative analysis has determined a correlation between length of stay and level of patient satisfaction; qualitative work could then explore why that correlation exists, for example through interviews with patients. This dual-focus scenario shows one way in which qualitative and quantitative research could be integrated.

Copyright © 2024, StatPearls Publishing LLC.


How many participants do I need for qualitative research?

  • Participant recruitment
  • Qualitative research

By David Renwick · 6 min read

For those new to the qualitative research space, there’s one question that’s usually pretty tough to figure out, and that’s the question of how many participants to include in a study. Regardless of whether it’s research as part of the discovery phase for a new product, or perhaps an in-depth canvas of the users of an existing service, researchers can often find it difficult to agree on the numbers. So is there an easy answer? Let’s find out.

Here, we’ll look into the right number of participants for qualitative research studies. If you want to know about participants for quantitative research, read Nielsen Norman Group’s article.

Getting the numbers right

So you need to run a series of user interviews or usability tests and aren’t sure exactly how many people you should reach out to. It can be a tricky situation – especially for those without much experience. Do you test a small selection of 1 or 2 people to make the recruitment process easier? Or, do you go big and test with a series of 10 people over the course of a month? The answer lies somewhere in between.

It’s often a good idea (for qualitative research methods like interviews and usability tests) to start with 5 participants and then scale up by a further 5 based on how complicated the subject matter is. You may also find it helpful to add additional participants if you’re new to user research or you’re working in a new area.

What you’re actually looking for here is what’s known as saturation.

Understanding saturation

Whether it’s qualitative research as part of a master’s thesis or as research for a new online dating app, saturation is the best metric you can use to identify when you’ve hit the right number of participants.

In a nutshell, saturation is when you’ve reached the point where adding further participants doesn’t give you any further insights. It’s true that you may still pick up on the occasional interesting detail, but all of your big revelations and learnings have come and gone. A good measure is to sit down after each session with a participant and analyze the number of new insights you’ve noted down.
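If you like to keep things concrete, that per-session tally can be sketched as a simple stopping rule. The function, labels and data below are entirely hypothetical, just one way to formalize "no new insights for a couple of sessions in a row":

```python
# Hypothetical sketch of a saturation check: each session's notes are
# reduced to a set of insight labels, and we stop once `patience`
# consecutive sessions add nothing new.

def saturation_point(sessions, patience=2):
    """Return the 1-based index of the last session that added a new
    insight before `patience` quiet sessions in a row, or None."""
    seen = set()
    quiet = 0
    for i, insights in enumerate(sessions, start=1):
        if insights - seen:          # anything we haven't heard before?
            seen |= insights
            quiet = 0
        else:
            quiet += 1
            if quiet >= patience:
                return i - patience  # saturation was reached back here
    return None

sessions = [
    {"nav confusion", "pricing unclear"},
    {"pricing unclear", "search slow"},
    {"search slow"},       # nothing new
    {"nav confusion"},     # nothing new: saturation after session 2
]
print(saturation_point(sessions))  # prints 2
```

Of course, real saturation judgments are qualitative; a counter like this is only a prompt for your post-session debrief, not a replacement for it.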

Interestingly, in a paper titled How Many Interviews Are Enough?, authors Greg Guest, Arwen Bunce and Laura Johnson noted that saturation usually occurs with around 12 participants in homogeneous groups (meaning people in the same role at an organization, for example). However, carrying out ethnographic research on a larger domain with a diverse set of participants will almost certainly require a larger sample.

Ensuring you’ve hit the right number of participants

How do you know when you’ve reached saturation point? You have to keep conducting interviews or usability tests until you’re no longer uncovering new insights or concepts.

While this may seem to run counter to the idea of just gathering as much data from as many people as possible, there’s a strong case for focusing on a smaller group of participants. In The logic of small samples in interview-based qualitative research, authors Mira Crouch and Heather McKenzie note that using fewer than 20 participants during a qualitative research study will result in better data. Why? With a smaller group, it’s easier for you (the researcher) to build strong close relationships with your participants, which in turn leads to more natural conversations and better data.

There’s also a school of thought that you should interview 5 or so people per persona. For example, if you’re working in a company that has well-defined personas, you might want to use those as a basis for your study, and then interview 5 people based on each persona. This may be worth considering, and it’s particularly important when you have a product with very distinct user groups (e.g., students and staff, or teachers and parents).

How your domain affects sample size

The scope of the topic you’re researching will change the amount of information you’ll need to gather before you’ve hit the saturation point. Your topic is also commonly referred to as the domain.

If you’re working in quite a confined domain, for example, a single screen of a mobile app or a very specific scenario, you’ll likely find interviews with 5 participants to be perfectly fine. Moving into more complicated domains, like the entire checkout process for an online shopping app, will push up your sample size.

As Mitchel Seaman notes: “Exploring a big issue like young people’s opinions about healthcare coverage, a broad emotional issue like postmarital sexuality, or a poorly-understood domain for your team like mobile device use in another country can drastically increase the number of interviews you’ll want to conduct.”

In-person or remote

Does the location of your participants change the number you need for qualitative user research? Well, not really – but there are other factors to consider.

  • Budget: If you choose to conduct remote interviews/usability tests, you’ll likely find you’ve got lower costs as you won’t need to travel to your participants or have them travel to you. This also affects…
  • Participant access: Remote qualitative research can be a lifesaver when it comes to participant access. No longer are you confined to the people you have physical access to — instead you can reach out to anyone you’d like.
  • Quality: On the other hand, remote research does have its downsides. For one, you’ll likely find you’re not able to build the same kinds of relationships over the internet or phone as those in person, which in turn means you never quite get the same level of insights.

Is there value in outsourcing recruitment?

Recruitment is understandably an intensive logistical exercise with many moving parts. If you’ve ever had to recruit people for a study before, you’ll understand the need for long lead times (to ensure you have enough participants for the project) and the countless long email chains as you discuss suitable times.

Outsourcing your participant recruitment is just one way to lighten the logistical load during your research. Instead of having to go out and look for participants, you have them essentially delivered to you in the right number and with the right attributes.

We’ve got one such service at Optimal Workshop, which means it’s the perfect accompaniment if you’re also using our platform of UX tools. Read more about that here.

So that’s really most of what there is to know about participant recruitment in a qualitative research context. As we said at the start, while it can appear quite tricky to figure out exactly how many people you need to recruit, it’s actually not all that difficult in reality.

Overall, the number of participants you need for your qualitative research depends on your project and domain, among other factors. It’s important to keep saturation in mind, as well as the locale of your participants, and to get the most you can out of what’s available to you. Remember: some research is better than none!

Capture, analyze and visualize your qualitative data.

Try our qualitative research tool for usability testing, interviewing and note-taking. Reframer by Optimal Workshop.

Published on August 8, 2019

David Renwick

David is Optimal Workshop's Content Strategist and Editor of CRUX. You can usually find him alongside one of the office dogs 🐕 (Bella, Bowie, Frida, Tana or Steezy). Connect with him on LinkedIn.


  • Research article
  • Open access
  • Published: 21 November 2018

Characterising and justifying sample size sufficiency in interview-based studies: systematic analysis of qualitative health research over a 15-year period

  • Konstantina Vasileiou   ORCID: orcid.org/0000-0001-5047-3920 1 ,
  • Julie Barnett 1 ,
  • Susan Thorpe 2 &
  • Terry Young 3  

BMC Medical Research Methodology, volume 18, Article number: 148 (2018)

Background

Choosing a suitable sample size in qualitative research is an area of conceptual debate and practical uncertainty. That sample size principles, guidelines and tools have been developed to enable researchers to set, and justify the acceptability of, their sample size is an indication that the issue constitutes an important marker of the quality of qualitative research. Nevertheless, research shows that sample size sufficiency reporting is often poor, if not absent, across a range of disciplinary fields.

Methods

A systematic analysis of single-interview-per-participant designs within three health-related journals from the disciplines of psychology, sociology and medicine, over a 15-year period, was conducted to examine whether and how sample sizes were justified and how sample size was characterised and discussed by authors. Data pertinent to sample size were extracted and analysed using qualitative and quantitative analytic techniques.

Results

Our findings demonstrate that provision of sample size justifications in qualitative health research is limited; is not contingent on the number of interviews; and relates to the journal of publication. Defence of sample size was most frequently supported across all three journals with reference to the principle of saturation and to pragmatic considerations. Qualitative sample sizes were predominantly – and often without justification – characterised as insufficient (i.e., ‘small’) and discussed in the context of study limitations. Sample size insufficiency was seen to threaten the validity and generalizability of studies’ results, with the latter being frequently conceived in nomothetic terms.

Conclusions

We recommend, firstly, that qualitative health researchers be more transparent about evaluations of their sample size sufficiency, situating these within broader and more encompassing assessments of data adequacy . Secondly, we invite researchers critically to consider how saturation parameters found in prior methodological studies and sample size community norms might best inform, and apply to, their own project and encourage that data adequacy is best appraised with reference to features that are intrinsic to the study at hand. Finally, those reviewing papers have a vital role in supporting and encouraging transparent study-specific reporting.

Sample adequacy in qualitative inquiry pertains to the appropriateness of the sample composition and size. It is an important consideration in evaluations of the quality and trustworthiness of much qualitative research [ 1 ] and is implicated – particularly for research that is situated within a post-positivist tradition and retains a degree of commitment to realist ontological premises – in appraisals of validity and generalizability [ 2 , 3 , 4 , 5 ].

Samples in qualitative research tend to be small in order to support the depth of case-oriented analysis that is fundamental to this mode of inquiry [ 5 ]. Additionally, qualitative samples are purposive, that is, selected by virtue of their capacity to provide richly-textured information, relevant to the phenomenon under investigation. As a result, purposive sampling [ 6 , 7 ] – as opposed to probability sampling employed in quantitative research – selects ‘information-rich’ cases [ 8 ]. Indeed, recent research demonstrates the greater efficiency of purposive sampling compared to random sampling in qualitative studies [ 9 ], supporting related assertions long put forward by qualitative methodologists.

Sample size in qualitative research has been the subject of enduring discussions [ 4 , 10 , 11 ]. Whilst the quantitative research community has established relatively straightforward statistics-based rules to set sample sizes precisely, the intricacies of qualitative sample size determination and assessment arise from the methodological, theoretical, epistemological, and ideological pluralism that characterises qualitative inquiry (for a discussion focused on the discipline of psychology see [ 12 ]). This mitigates against clear-cut guidelines that can be invariably applied. Despite these challenges, various conceptual developments have sought to address this issue, offering guidance and principles [ 4 , 10 , 11 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 ] and, more recently, an evidence-based approach that seeks to ground sample size determination empirically [ 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 ].

Focusing on single-interview-per-participant qualitative designs, the present study aims to further contribute to the dialogue of sample size in qualitative research by offering empirical evidence around justification practices associated with sample size. We next review the existing conceptual and empirical literature on sample size determination.

Sample size in qualitative research: Conceptual developments and empirical investigations

Qualitative research experts argue that there is no straightforward answer to the question of ‘how many’ and that sample size is contingent on a number of factors relating to epistemological, methodological and practical issues [ 36 ]. Sandelowski [ 4 ] recommends that qualitative sample sizes are large enough to allow the unfolding of a ‘new and richly textured understanding’ of the phenomenon under study, but small enough so that the ‘deep, case-oriented analysis’ (p. 183) of qualitative data is not precluded. Morse [ 11 ] posits that the more useable data are collected from each person, the fewer participants are needed. She invites researchers to take into account parameters, such as the scope of study, the nature of topic (i.e. complexity, accessibility), the quality of data, and the study design. Indeed, the level of structure of questions in qualitative interviewing has been found to influence the richness of data generated [ 37 ], and so, requires attention; empirical research shows that open questions, which are asked later on in the interview, tend to produce richer data [ 37 ].

Beyond such guidance, specific numerical recommendations have also been proffered, often based on experts’ experience of qualitative research. For example, Green and Thorogood [ 38 ] maintain that the experience of most qualitative researchers conducting an interview-based study with a fairly specific research question is that little new information is generated after interviewing 20 people or so belonging to one analytically relevant participant ‘category’ (pp. 102–104). Ritchie et al. [ 39 ] suggest that studies employing individual interviews conduct no more than 50 interviews so that researchers are able to manage the complexity of the analytic task. Similarly, Britten [ 40 ] notes that large interview studies will often comprise 50 to 60 people. Experts have also offered numerical guidelines tailored to different theoretical and methodological traditions and specific research approaches, e.g. grounded theory, phenomenology [ 11 , 41 ]. More recently, a quantitative tool was proposed [ 42 ] to support a priori sample size determination based on estimates of the prevalence of themes in the population. Nevertheless, this more formulaic approach raised criticisms relating to assumptions about the conceptual [ 43 ] and ontological status of ‘themes’ [ 44 ] and the linearity ascribed to the processes of sampling, data collection and data analysis [ 45 ].
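The style of reasoning behind such prevalence-based tools can be illustrated with a simple model (a sketch under an independence assumption, not the exact tool proposed in [ 42 ]): if a theme is held by a proportion p of the population, the smallest sample size n giving probability q of encountering that theme at least once satisfies (1 − p)^n ≤ 1 − q, i.e. n ≥ ln(1 − q) / ln(1 − p).

```python
import math

# Illustrative prevalence-based sample size sketch (independent sampling
# assumed): smallest n such that a theme with population prevalence `p`
# is observed at least once with probability `confidence`,
# i.e. 1 - (1 - p)**n >= confidence.

def n_for_theme(p, confidence=0.95):
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

print(n_for_theme(0.5))   # theme held by half the population: 5 interviews
print(n_for_theme(0.1))   # a rarer theme (1 in 10): 29 interviews
```

The steep growth of n for rarer themes illustrates why the adequacy of such formulaic approaches depends heavily on assumptions about theme prevalence.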

In terms of principles, Lincoln and Guba [ 17 ] proposed that sample size determination be guided by the criterion of informational redundancy , that is, sampling can be terminated when no new information is elicited by sampling more units. Following the logic of informational comprehensiveness Malterud et al. [ 18 ] introduced the concept of information power as a pragmatic guiding principle, suggesting that the more information power the sample provides, the smaller the sample size needs to be, and vice versa.

Undoubtedly, the most widely used principle for determining sample size and evaluating its sufficiency is that of saturation . The notion of saturation originates in grounded theory [ 15 ] – a qualitative methodological approach explicitly concerned with empirically-derived theory development – and is inextricably linked to theoretical sampling. Theoretical sampling describes an iterative process of data collection, data analysis and theory development whereby data collection is governed by emerging theory rather than predefined characteristics of the population. Grounded theory saturation (often called theoretical saturation) concerns the theoretical categories – as opposed to data – that are being developed and becomes evident when ‘gathering fresh data no longer sparks new theoretical insights, nor reveals new properties of your core theoretical categories’ [ 46 p. 113]. Saturation in grounded theory, therefore, does not equate to the more common focus on data repetition and moves beyond a singular focus on sample size as the justification of sampling adequacy [ 46 , 47 ]. Sample size in grounded theory cannot be determined a priori as it is contingent on the evolving theoretical categories.

Saturation – often under the terms of ‘data’ or ‘thematic’ saturation – has diffused into several qualitative communities beyond its origins in grounded theory. Alongside the expansion of its meaning, being variously equated with ‘no new data’, ‘no new themes’, and ‘no new codes’, saturation has emerged as the ‘gold standard’ in qualitative inquiry [ 2 , 26 ]. Nevertheless, and as Morse [ 48 ] asserts, whilst saturation is the most frequently invoked ‘guarantee of qualitative rigor’, ‘it is the one we know least about’ (p. 587). Certainly researchers caution that saturation is less applicable to, or appropriate for, particular types of qualitative research (e.g. conversation analysis, [ 49 ]; phenomenological research, [ 50 ]) whilst others reject the concept altogether [ 19 , 51 ].

Methodological studies in this area aim to provide guidance about saturation and develop a practical application of processes that ‘operationalise’ and evidence saturation. Guest, Bunce, and Johnson [ 26 ] analysed 60 interviews and found that saturation of themes was reached by the twelfth interview. They noted that their sample was relatively homogeneous, their research aims focused, so studies of more heterogeneous samples and with a broader scope would be likely to need a larger size to achieve saturation. Extending the enquiry to multi-site, cross-cultural research, Hagaman and Wutich [ 28 ] showed that sample sizes of 20 to 40 interviews were required to achieve data saturation of meta-themes that cut across research sites. In a theory-driven content analysis, Francis et al. [ 25 ] reached data saturation at the 17th interview for all their pre-determined theoretical constructs. The authors further proposed two main principles upon which specification of saturation be based: (a) researchers should a priori specify an initial analysis sample (e.g. 10 interviews) which will be used for the first round of analysis and (b) a stopping criterion , that is, a number of interviews (e.g. 3) that needs to be further conducted, the analysis of which will not yield any new themes or ideas. For greater transparency, Francis et al. [ 25 ] recommend that researchers present cumulative frequency graphs supporting their judgment that saturation was achieved. A comparative method for themes saturation (CoMeTS) has also been suggested [ 23 ] whereby the findings of each new interview are compared with those that have already emerged and if it does not yield any new theme, the ‘saturated terrain’ is assumed to have been established. Because the order in which interviews are analysed can influence saturation thresholds depending on the richness of the data, Constantinou et al. [ 23 ] recommend reordering and re-analysing interviews to confirm saturation. 
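As a rough operationalisation of the stopping criterion proposed by Francis et al. [ 25 ] (a hedged sketch; the data structure, function name, and parameter values are illustrative rather than the authors' procedure):

```python
# Sketch of an a-priori stopping criterion: analyse an initial batch of
# interviews, then stop once `stop_n` consecutive further interviews
# contribute no new themes. Each interview is a set of theme labels.

def interviews_needed(themes_by_interview, initial=10, stop_n=3):
    seen = set()
    for themes in themes_by_interview[:initial]:   # initial analysis sample
        seen |= themes
    run = 0
    for i, themes in enumerate(themes_by_interview[initial:], start=initial + 1):
        if themes - seen:      # a new theme appeared: reset the run
            seen |= themes
            run = 0
        else:
            run += 1
            if run == stop_n:
                return i       # total interviews when the criterion is met
    return None                # criterion not met with the data available

themes = [{"a", "b"}, {"b", "c"}, {"c"}, {"d"},   # initial sample of 4
          {"a"}, {"b"}, {"d"}]                    # three quiet interviews
print(interviews_needed(themes, initial=4, stop_n=3))  # prints 7
```

The cumulative count of distinct themes per interview that Francis et al. recommend graphing falls straight out of the same loop.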
Hennink, Kaiser and Marconi’s [ 29 ] methodological study sheds further light on the problem of specifying and demonstrating saturation. Their analysis of interview data showed that code saturation (i.e. the point at which no additional issues are identified) was achieved at 9 interviews, but meaning saturation (i.e. the point at which no further dimensions, nuances, or insights of issues are identified) required 16–24 interviews. Although breadth can be achieved relatively soon, especially for high-prevalence and concrete codes, depth requires additional data, especially for codes of a more conceptual nature.

Critiquing the concept of saturation, Nelson [ 19 ] proposes five conceptual depth criteria for assessing the robustness of the developing theory in grounded theory projects: theoretical concepts should (a) be supported by a wide range of evidence drawn from the data; (b) be demonstrably part of a network of inter-connected concepts; (c) demonstrate subtlety; (d) resonate with existing literature; and (e) successfully withstand tests of external validity.

Other work has sought to examine practices of sample size reporting and sufficiency assessment across a range of disciplinary fields and research domains, from nutrition [ 34 ] and health education [ 32 ], to education and the health sciences [ 22 , 27 ], information systems [ 30 ], organisation and workplace studies [ 33 ], human-computer interaction [ 21 ], and accounting studies [ 24 ]. Other reviews have focused on PhD qualitative studies [ 31 ] and grounded theory studies [ 35 ]. These investigations commonly pinpoint incomplete and imprecise sample size reporting, whilst assessments and justifications of sample size sufficiency are rarer still.

Sobal [ 34 ] examined the sample sizes of qualitative studies published in the Journal of Nutrition Education over a period of 30 years. Studies that employed individual interviews ( n  = 30) had an average sample size of 45 individuals, and none explicitly reported whether saturation was sought and/or attained. A minority of articles discussed how sample-related limitations (most often concerning the type of sample rather than its size) constrained generalizability. A further systematic analysis [ 32 ] of health education research over 20 years demonstrated that interview-based studies averaged 104 participants (range 2 to 720 interviewees); however, 40% did not report the number of participants. An examination of 83 qualitative interview studies in leading information systems journals [ 30 ] indicated little defence of sample sizes on the basis of recommendations by qualitative methodologists, prior relevant work, or the criterion of saturation. Rather, sample size seemed to correlate with factors such as the journal of publication or the region of study (US vs Europe vs Asia). These results led the authors to call for more rigor in determining and reporting sample size in qualitative information systems research and to recommend optimal sample size ranges for grounded theory (i.e. 20–30 interviews) and single case (i.e. 15–30 interviews) projects.

Similarly, fewer than 10% of articles in organisation and workplace studies provided a sample size justification relating to existing recommendations by methodologists, prior relevant work, or saturation [ 33 ], whilst only 17% of focus group studies in health-related journals provided an explanation of sample size (i.e. number of focus groups), with saturation being the most frequently invoked argument, followed by published sample size recommendations and practical reasons [ 22 ]. The notion of saturation was also invoked by 11 of the 51 most highly cited studies that Guetterman [ 27 ] reviewed in the fields of education and health sciences, of which six were grounded theory studies, four phenomenological, and one a narrative inquiry. Finally, analysing 641 interview-based articles in accounting, Dai et al. [ 24 ] called for more rigor, since a significant minority of studies did not report a precise sample size.

Despite increasing attention to rigor in qualitative research (e.g. [ 52 ]) and more extensive methodological and analytical disclosures that seek to validate qualitative work [ 24 ], sample size reporting and sufficiency assessment remain inconsistent and partial, if not absent, across a range of research domains.

Objectives of the present study

The present study sought to enrich existing systematic analyses of the customs and practices of sample size reporting and justification by focusing on qualitative research relating to health. Additionally, it attempted to expand previous empirical investigations by examining how qualitative sample sizes are characterised and discussed in academic narratives. Qualitative health research is an inter-disciplinary field that, due to its affiliation with the medical sciences, often faces views and positions reflective of a quantitative ethos. Qualitative health research thus constitutes an emblematic case that may help to surface underlying philosophical and methodological differences across the scientific community that are crystallised in considerations of sample size. The present research, therefore, incorporates a comparative element on the basis of three different disciplines engaging with qualitative health research: medicine, psychology, and sociology. We chose to focus our analysis on single-interview-per-participant designs, as this is not only a popular and widespread methodological choice in qualitative health research, but also the design for which consideration of sample size – defined as the number of interviewees – is particularly salient.

Study design

A structured search for articles reporting cross-sectional, interview-based qualitative studies was carried out and eligible reports were systematically reviewed and analysed employing both quantitative and qualitative analytic techniques.

We selected journals which (a) follow a peer review process, (b) are considered high quality and influential in their field as reflected in journal metrics, and (c) are receptive to, and publish, qualitative research (Additional File 1 presents the journals’ editorial positions in relation to qualitative research and sample considerations, where available). Three health-related journals were chosen, each representing a different disciplinary field: the British Medical Journal (BMJ) representing medicine, the British Journal of Health Psychology (BJHP) representing psychology, and the Sociology of Health & Illness (SHI) representing sociology.

Search strategy to identify studies

Employing the search function of each individual journal, we used the terms ‘interview*’ AND ‘qualitative’ and limited the results to articles published between 1 January 2003 and 22 September 2017 (i.e. a 15-year review period).

Eligibility criteria

To be eligible for inclusion in the review, the article had to report a cross-sectional study design. Longitudinal studies were thus excluded whilst studies conducted within a broader research programme (e.g. interview studies nested in a trial, as part of a broader ethnography, as part of a longitudinal research) were included if they reported only single-time qualitative interviews. The method of data collection had to be individual, synchronous qualitative interviews (i.e. group interviews, structured interviews and e-mail interviews over a period of time were excluded), and the data had to be analysed qualitatively (i.e. studies that quantified their qualitative data were excluded). Mixed method studies and articles reporting more than one qualitative method of data collection (e.g. individual interviews and focus groups) were excluded. Figure  1 , a PRISMA flow diagram [ 53 ], shows the number of: articles obtained from the searches and screened; papers assessed for eligibility; and articles included in the review (Additional File  2 provides the full list of articles included in the review and their unique identifying code – e.g. BMJ01, BJHP02, SHI03). One review author (KV) assessed the eligibility of all papers identified from the searches. When in doubt, discussions about retaining or excluding articles were held between KV and JB in regular meetings, and decisions were jointly made.

figure 1

PRISMA flow diagram

Data extraction and analysis

A data extraction form was developed (see Additional File  3 ) recording three areas of information: (a) information about the article (e.g. authors, title, journal, year of publication etc.); (b) information about the aims of the study, the sample size and any justification for this, the participant characteristics, the sampling technique and any sample-related observations or comments made by the authors; and (c) information about the method or technique(s) of data analysis, the number of researchers involved in the analysis, the potential use of software, and any discussion around epistemological considerations. The Abstract, Methods and Discussion (and/or Conclusion) sections of each article were examined by one author (KV) who extracted all the relevant information. This was directly copied from the articles and, when appropriate, comments, notes and initial thoughts were written down.

To examine the kinds of sample size justifications provided by articles, an inductive content analysis [ 54 ] was initially conducted. On the basis of this analysis, the categories that expressed qualitatively different sample size justifications were developed.

We also extracted or coded quantitative data regarding the following aspects:

Journal and year of publication

Number of interviews

Number of participants

Presence of sample size justification(s) (Yes/No)

Presence of a particular sample size justification category (Yes/No), and

Number of sample size justifications provided

Descriptive and inferential statistical analyses were used to explore these data.

A thematic analysis [ 55 ] was then performed on all scientific narratives that discussed or commented on the sample size of the study. These narratives were evident both in papers that justified their sample size and those that did not. To identify these narratives, in addition to the methods sections, the discussion sections of the reviewed articles were also examined and relevant data were extracted and analysed.

In total, 214 articles – 21 in the BMJ, 53 in the BJHP and 140 in the SHI – were eligible for inclusion in the review. Table  1 provides basic information about the sample sizes – measured in number of interviews – of the studies reviewed across the three journals. Figure  2 depicts the number of eligible articles published each year per journal.

figure 2

The publication of qualitative studies in the BMJ fell markedly from 2012 onwards; this appears to coincide with the launch of BMJ Open, to which qualitative submissions were possibly directed.

Pairwise comparisons following a significant Kruskal-Wallis Footnote 2 test indicated that the studies published in the BJHP had significantly ( p  < .001) smaller sample sizes than those published either in the BMJ or the SHI. Sample sizes of BMJ and SHI articles did not differ significantly from each other.
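The omnibus-then-pairwise procedure reported here can be sketched with SciPy’s nonparametric tests. The interview counts below are illustrative stand-ins, not the study’s data (which are summarised in Table 1), so the printed statistics do not reproduce the paper’s results:

```python
from scipy.stats import kruskal, mannwhitneyu

# Illustrative interview counts per article -- NOT the study's data.
bmj  = [21, 35, 30, 28, 40, 25, 33]
bjhp = [8, 10, 13, 9, 11, 15, 12]
shi  = [30, 25, 45, 28, 36, 50, 27]

# Omnibus Kruskal-Wallis test across the three journals
h, p = kruskal(bmj, bjhp, shi)
print(f"H = {h:.2f}, p = {p:.4f}")

# If significant, follow up with pairwise Mann-Whitney comparisons
if p < 0.05:
    for name, group in [("BMJ", bmj), ("SHI", shi)]:
        u, p_pair = mannwhitneyu(bjhp, group, alternative="two-sided")
        print(f"BJHP vs {name}: U = {u:.1f}, p = {p_pair:.4f}")
```

Pairwise follow-ups are usually reported with a multiplicity correction; the article does not specify one, so none is applied in this sketch.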

Sample size justifications: Results from the quantitative and qualitative content analysis

Ten (47.6%) of the 21 BMJ studies, 26 (49.1%) of the 53 BJHP papers and 24 (17.1%) of the 140 SHI articles provided some sort of sample size justification. As shown in Table  2 , the majority of articles which justified their sample size provided one justification (70% of articles); fourteen studies (23.3%) provided two distinct justifications; one study (1.7%) gave three justifications and two studies (3.3%) expressed four distinct justifications.

There was no association between the number of interviews (i.e. sample size) conducted and the provision of a justification (rpb = .054, p  = .433). Within journals, Mann-Whitney tests indicated that sample sizes of ‘justifying’ and ‘non-justifying’ articles in the BMJ and SHI did not differ significantly from each other. In the BJHP, ‘justifying’ articles ( Mean rank  = 31.3) had significantly larger sample sizes than ‘non-justifying’ studies ( Mean rank  = 22.7; U = 237.000, p  < .05).

There was a significant association between the journal a paper was published in and the provision of a justification (χ2(2) = 23.83, p  < .001). BJHP studies provided a sample size justification significantly more often than would be expected ( z  = 2.9); SHI studies significantly less often ( z  = −2.4). If an article was published in the BJHP, the odds of providing a justification were 4.8 times higher than if published in the SHI. Similarly, if published in the BMJ, the odds of a study justifying its sample size were 4.5 times higher than in the SHI.
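These figures can be checked directly from the counts reported above (BMJ 10/21, BJHP 26/53, SHI 24/140). The sketch below is ours, not the authors’ code; plain Pearson residuals (O − E)/√E happen to reproduce the reported z values, though the paper does not state which residual definition it used:

```python
import math
from scipy.stats import chi2_contingency

# Justified / not-justified counts per journal, from the figures above
table = {
    "BMJ":  (10, 11),    # 10 of 21 justified
    "BJHP": (26, 27),    # 26 of 53
    "SHI":  (24, 116),   # 24 of 140
}
observed = [list(counts) for counts in table.values()]
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.2g}")   # chi2(2) = 23.83

# Pearson residuals (O - E) / sqrt(E) for the 'justified' column
for (journal, (obs, _)), exp_row in zip(table.items(), expected):
    z = (obs - exp_row[0]) / math.sqrt(exp_row[0])
    print(f"{journal}: z = {z:+.1f}")             # BJHP +2.9, SHI -2.4
```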

The qualitative content analysis of the scientific narratives identified eleven different sample size justifications. These are described below and illustrated with excerpts from relevant articles. By way of summary, the frequency with which these justifications were deployed across the three journals is indicated in Table  3 .

Saturation

Saturation was the most commonly invoked principle (55.4% of all justifications) deployed by studies across all three journals to justify the sufficiency of their sample size. In the BMJ, two studies claimed that they achieved data saturation (BMJ17; BMJ18) and one article referred descriptively to achieving saturation without explicitly using the term (BMJ13). Interestingly, BMJ13 included data in the analysis beyond the point of saturation in search of ‘unusual/deviant observations’ and with a view to establishing findings consistency.

Thirty three women were approached to take part in the interview study. Twenty seven agreed and 21 (aged 21–64, median 40) were interviewed before data saturation was reached (one tape failure meant that 20 interviews were available for analysis). (BMJ17). No new topics were identified following analysis of approximately two thirds of the interviews; however, all interviews were coded in order to develop a better understanding of how characteristic the views and reported behaviours were, and also to collect further examples of unusual/deviant observations. (BMJ13).

Two articles reported pre-determining their sample size with a view to achieving data saturation (BMJ08 – see extract in section In line with existing research ; BMJ15 – see extract in section Pragmatic considerations ) without further specifying if this was achieved. One paper claimed theoretical saturation (BMJ06) conceived as being when “no further recurring themes emerging from the analysis” whilst another study argued that although the analytic categories were highly saturated, it was not possible to determine whether theoretical saturation had been achieved (BMJ04). One article (BMJ18) cited a reference to support its position on saturation.

In the BJHP, six articles claimed that they achieved data saturation (BJHP21; BJHP32; BJHP39; BJHP48; BJHP49; BJHP52) and one article stated that, given their sample size and the guidelines for achieving data saturation, it anticipated that saturation would be attained (BJHP50).

Recruitment continued until data saturation was reached, defined as the point at which no new themes emerged. (BJHP48). It has previously been recommended that qualitative studies require a minimum sample size of at least 12 to reach data saturation (Clarke & Braun, 2013; Fugard & Potts, 2014; Guest, Bunce, & Johnson, 2006) Therefore, a sample of 13 was deemed sufficient for the qualitative analysis and scale of this study. (BJHP50).

Two studies argued that they achieved thematic saturation (BJHP28 – see extract in section Sample size guidelines ; BJHP31) and one (BJHP30) article, explicitly concerned with theory development and deploying theoretical sampling, claimed both theoretical and data saturation.

The final sample size was determined by thematic saturation, the point at which new data appears to no longer contribute to the findings due to repetition of themes and comments by participants (Morse, 1995). At this point, data generation was terminated. (BJHP31).

Five studies argued that they achieved (BJHP05; BJHP33; BJHP40; BJHP13 – see extract in section Pragmatic considerations ) or anticipated (BJHP46) saturation without any further specification of the term. BJHP17 referred descriptively to a state of achieved saturation without specifically using the term. Saturation of coding , but not saturation of themes, was claimed to have been reached by one article (BJHP18). Two articles explicitly stated that they did not achieve saturation, instead invoking a level of theme completeness (BJHP27) or the replication of themes (BJHP53) as arguments for the sufficiency of their sample size.

Furthermore, data collection ceased on pragmatic grounds rather than at the point when saturation point was reached. Despite this, although nuances within sub-themes were still emerging towards the end of data analysis, the themes themselves were being replicated indicating a level of completeness. (BJHP27).

Finally, one article criticised and explicitly renounced the notion of data saturation claiming that, on the contrary, the criterion of theoretical sufficiency determined its sample size (BJHP16).

According to the original Grounded Theory texts, data collection should continue until there are no new discoveries ( i.e. , ‘data saturation’; Glaser & Strauss, 1967). However, recent revisions of this process have discussed how it is rare that data collection is an exhaustive process and researchers should rely on how well their data are able to create a sufficient theoretical account or ‘theoretical sufficiency’ (Dey, 1999). For this study, it was decided that theoretical sufficiency would guide recruitment, rather than looking for data saturation. (BJHP16).

Ten out of the 20 BJHP articles that employed the argument of saturation used one or more citations relating to this principle.

In the SHI, one article (SHI01) claimed that it achieved category saturation based on authors’ judgment.

This number was not fixed in advance, but was guided by the sampling strategy and the judgement, based on the analysis of the data, of the point at which ‘category saturation’ was achieved. (SHI01).

Three articles described a state of achieved saturation without using the term or specifying what sort of saturation they had achieved (i.e. data, theoretical, thematic saturation) (SHI04; SHI13; SHI30), whilst another four articles explicitly stated that they achieved saturation (SHI100; SHI125; SHI136; SHI137). Two papers stated that they achieved data saturation (SHI73 – see extract in section Sample size guidelines ; SHI113), two claimed theoretical saturation (SHI78; SHI115), two referred to achieving thematic saturation (SHI87; SHI139), and two to saturated themes (SHI29; SHI50).

Recruitment and analysis ceased once theoretical saturation was reached in the categories described below (Lincoln and Guba 1985). (SHI115). The respondents’ quotes drawn on below were chosen as representative, and illustrate saturated themes. (SHI50).

One article stated that thematic saturation was anticipated with its sample size (SHI94). Briefly referring to the difficulty in pinpointing achievement of theoretical saturation, SHI32 (see extract in section Richness and volume of data ) defended the sufficiency of its sample size on the basis of “the high degree of consensus [that] had begun to emerge among those interviewed”, suggesting that information from interviews was being replicated. Finally, SHI112 (see extract in section Further sampling to check findings consistency ) argued that it achieved saturation of discursive patterns . Seven of the 19 SHI articles cited references to support their position on saturation (see Additional File  4 for the full list of citations used by articles to support their position on saturation across the three journals).

Overall, it is clear that the concept of saturation encompassed a wide range of variants, expressed in terms such as saturation, data saturation, thematic saturation, theoretical saturation, category saturation, saturation of coding, saturation of discursive patterns, and theme completeness. It is noteworthy, however, that although these various claims were sometimes supported with reference to the literature, they were not evidenced in relation to the study at hand.

Pragmatic considerations

The determination of sample size on the basis of pragmatic considerations was the second most frequently invoked argument (9.6% of all justifications) appearing in all three journals. In the BMJ, one article (BMJ15) appealed to pragmatic reasons, relating to time constraints and the difficulty to access certain study populations, to justify the determination of its sample size.

On the basis of the researchers’ previous experience and the literature, [30, 31] we estimated that recruitment of 15–20 patients at each site would achieve data saturation when data from each site were analysed separately. We set a target of seven to 10 caregivers per site because of time constraints and the anticipated difficulty of accessing caregivers at some home based care services. This gave a target sample of 75–100 patients and 35–50 caregivers overall. (BMJ15).

In the BJHP, four articles mentioned pragmatic considerations relating to time or financial constraints (BJHP27 – see extract in section Saturation ; BJHP53), the participant response rate (BJHP13), and the fixed (and thus limited) size of the participant pool from which interviewees were sampled (BJHP18).

We had aimed to continue interviewing until we had reached saturation, a point whereby further data collection would yield no further themes. In practice, the number of individuals volunteering to participate dictated when recruitment into the study ceased (15 young people, 15 parents). Nonetheless, by the last few interviews, significant repetition of concepts was occurring, suggesting ample sampling. (BJHP13).

Finally, three SHI articles explained their sample size with reference to practical aspects: time constraints and project manageability (SHI56), limited availability of respondents and project resources (SHI131), and time constraints (SHI113).

The size of the sample was largely determined by the availability of respondents and resources to complete the study. Its composition reflected, as far as practicable, our interest in how contextual factors (for example, gender relations and ethnicity) mediated the illness experience. (SHI131).

Qualities of the analysis

This sample size justification (8.4% of all justifications) was mainly employed by BJHP articles and referred to an intensive, idiographic and/or latently focused analysis, i.e. one that moved beyond description. More specifically, six articles defended their sample size on the basis of an intensive analysis of transcripts and/or the idiographic focus of the study/analysis. Four of these papers (BJHP02; BJHP19; BJHP24; BJHP47) adopted an Interpretative Phenomenological Analysis (IPA) approach.

The current study employed a sample of 10 in keeping with the aim of exploring each participant’s account (Smith et al. , 1999). (BJHP19).

BJHP47 explicitly renounced the notion of saturation within an IPA approach. The other two BJHP articles conducted thematic analysis (BJHP34; BJHP38). The level of analysis – i.e. latent, as opposed to a more superficial descriptive analysis – was also invoked as a justification by BJHP38, alongside the argument of an intensive analysis of individual transcripts.

The resulting sample size was at the lower end of the range of sample sizes employed in thematic analysis (Braun & Clarke, 2013). This was in order to enable significant reflection, dialogue, and time on each transcript and was in line with the more latent level of analysis employed, to identify underlying ideas, rather than a more superficial descriptive analysis (Braun & Clarke, 2006). (BJHP38).

Finally, one BMJ paper (BMJ21) defended its sample size with reference to the complexity of the analytic task.

We stopped recruitment when we reached 30–35 interviews, owing to the depth and duration of interviews, richness of data, and complexity of the analytical task. (BMJ21).

Meet sampling requirements

Meeting sampling requirements (7.2% of all justifications) was another argument employed by two BMJ and four SHI articles to explain their sample size. Achieving maximum variation sampling in terms of specific interviewee characteristics determined and explained the sample size of two BMJ studies (BMJ02; BMJ16 – see extract in section Meet research design requirements ).

Recruitment continued until sampling frame requirements were met for diversity in age, sex, ethnicity, frequency of attendance, and health status. (BMJ02).

Regarding the SHI articles, two papers explained their numbers on the basis of their sampling strategy (SHI01 – see extract in section Saturation ; SHI23), whilst sampling requirements that would help attain sample heterogeneity in terms of a particular characteristic of interest were cited by one paper (SHI127).

The combination of matching the recruitment sites for the quantitative research and the additional purposive criteria led to 104 phase 2 interviews (Internet (OLC): 21; Internet (FTF): 20; Gyms (FTF): 23; HIV testing (FTF): 20; HIV treatment (FTF): 20). (SHI23). Of the fifty interviews conducted, thirty were translated from Spanish into English. These thirty, from which we draw our findings, were chosen for translation based on heterogeneity in depressive symptomology and educational attainment. (SHI127).

Finally, the pre-determination of sample size on the basis of sampling requirements was stated by one article though this was not used to justify the number of interviews (SHI10).

Sample size guidelines

Five BJHP articles (BJHP28; BJHP38 – see extract in section Qualities of the analysis ; BJHP46; BJHP47; BJHP50 – see extract in section Saturation ) and one SHI paper (SHI73) relied on citing existing sample size guidelines or norms within research traditions to determine and subsequently defend their sample size (7.2% of all justifications).

Sample size guidelines suggested a range between 20 and 30 interviews to be adequate (Creswell, 1998). Interviewer and note taker agreed that thematic saturation, the point at which no new concepts emerge from subsequent interviews (Patton, 2002), was achieved following completion of 20 interviews. (BJHP28). Interviewing continued until we deemed data saturation to have been reached (the point at which no new themes were emerging). Researchers have proposed 30 as an approximate or working number of interviews at which one could expect to be reaching theoretical saturation when using a semi-structured interview approach (Morse 2000), although this can vary depending on the heterogeneity of respondents interviewed and complexity of the issues explored. (SHI73).

In line with existing research

Sample sizes of published literature in the area of the subject matter under investigation (3.5% of all justifications) were used by two BMJ articles as guidance and a precedent for determining and defending their own sample size (BMJ08; BMJ15 – see extract in section Pragmatic considerations ).

We drew participants from a list of prisoners who were scheduled for release each week, sampling them until we reached the target of 35 cases, with a view to achieving data saturation within the scope of the study and sufficient follow-up interviews and in line with recent studies [8–10]. (BMJ08).

Similarly, BJHP38 (see extract in section Qualities of the analysis ) claimed that its sample size was within the range of sample sizes of published studies that use its analytic approach.

Richness and volume of data

BMJ21 (see extract in section Qualities of the analysis ) and SHI32 referred to the richness, detailed nature, and volume of data collected (2.3% of all justifications) to justify the sufficiency of their sample size.

Although there were more potential interviewees from those contacted by postcode selection, it was decided to stop recruitment after the 10th interview and focus on analysis of this sample. The material collected was considerable and, given the focused nature of the study, extremely detailed. Moreover, a high degree of consensus had begun to emerge among those interviewed, and while it is always difficult to judge at what point ‘theoretical saturation’ has been reached, or how many interviews would be required to uncover exception(s), it was felt the number was sufficient to satisfy the aims of this small in-depth investigation (Strauss and Corbin 1990). (SHI32).

Meet research design requirements

Determining the sample size so that it was in line with, and served the requirements of, the research design adopted by the study (2.3% of all justifications) was another justification used by two BMJ papers (BMJ16; BMJ08 – see extract in section In line with existing research ).

We aimed for diverse, maximum variation samples [20] totalling 80 respondents from different social backgrounds and ethnic groups and those bereaved due to different types of suicide and traumatic death. We could have interviewed a smaller sample at different points in time (a qualitative longitudinal study) but chose instead to seek a broad range of experiences by interviewing those bereaved many years ago and others bereaved more recently; those bereaved in different circumstances and with different relations to the deceased; and people who lived in different parts of the UK; with different support systems and coroners’ procedures (see Tables 1 and 2 for more details). (BMJ16).

Researchers’ previous experience

The researchers’ previous experience (possibly referring to experience with qualitative research) was invoked by BMJ15 (see extract in section Pragmatic considerations ) as a justification for the determination of sample size.

Nature of study

One BJHP paper argued that the sample size was appropriate for the exploratory nature of the study (BJHP38).

A sample of eight participants was deemed appropriate because of the exploratory nature of this research and the focus on identifying underlying ideas about the topic. (BJHP38).

Further sampling to check findings consistency

Finally, SHI112 argued that once it had achieved saturation of discursive patterns, further sampling was decided and conducted to check for consistency of the findings.

Within each of the age-stratified groups, interviews were randomly sampled until saturation of discursive patterns was achieved. This resulted in a sample of 67 interviews. Once this sample had been analysed, one further interview from each age-stratified group was randomly chosen to check for consistency of the findings. Using this approach it was possible to more carefully explore children’s discourse about the ‘I’, agency, relationality and power in the thematic areas, revealing the subtle discursive variations described in this article. (SHI112).

Thematic analysis of passages discussing sample size

This analysis resulted in two overarching thematic areas; the first concerned the variation in the characterisation of sample size sufficiency, and the second related to the perceived threats deriving from sample size insufficiency.

Characterisations of sample size sufficiency

The analysis showed that there were three main characterisations of sample size in the articles that provided relevant comments and discussion: (a) the vast majority of these qualitative studies ( n  = 42) considered their sample size ‘small’ and discussed this as a limitation; only two articles viewed their small sample size as desirable and appropriate; (b) a minority of articles ( n  = 4) proclaimed that their achieved sample size was ‘sufficient’; and (c) finally, a small group of studies ( n  = 5) characterised their sample size as ‘large’. Whilst achieving a ‘large’ sample size was sometimes viewed positively because it led to richer results, there were also occasions when a large sample size was problematic rather than desirable.

‘Small’ but why and for whom?

A number of articles that characterised their sample size as ‘small’ did so against an implicit or explicit quantitative frame of reference. Interestingly, three studies that claimed to have achieved data saturation or ‘theoretical sufficiency’ with their sample size nevertheless noted their ‘small’ sample size as a limitation in their discussion, raising the question of why, or for whom, the sample size was considered small given that the qualitative criterion of saturation had been satisfied.

The current study has a number of limitations. The sample size was small (n = 11) and, however, large enough for no new themes to emerge. (BJHP39). The study has two principal limitations. The first of these relates to the small number of respondents who took part in the study. (SHI73).

Other articles appeared to accept and acknowledge that their sample was flawed because of its small size (as well as other compositional ‘deficits’, e.g. non-representativeness, biases, self-selection), or anticipated that they might be criticised for their small sample size. It seemed that the imagined audience – perhaps a reviewer or reader – was one inclined to hold the tenets of quantitative research, and certainly one to whom it was important to signal recognition that small samples were likely to be problematic. The acknowledgement that one’s sample might be thought small was often couched in a discourse of regret or apology.

Very occasionally, the articulation of the small size as a limitation was explicitly aligned against an espoused positivist framework and quantitative research.

This study has some limitations. Firstly, the 100 incidents sample represents a small number of the total number of serious incidents that occurs every year. 26 We sent out a nationwide invitation and do not know why more people did not volunteer for the study. Our lack of epidemiological knowledge about healthcare incidents, however, means that determining an appropriate sample size continues to be difficult. (BMJ20).

Indicative of an apparent oscillation of qualitative researchers between the different requirements and protocols demarcating the quantitative and qualitative worlds, a few articles briefly recognised their ‘small’ sample size as a limitation but then defended the study on more qualitative grounds, such as success in capturing the complexity of experience, delving into the idiographic, and generating particularly rich data.

This research, while limited in size, has sought to capture some of the complexity attached to men’s attitudes and experiences concerning incomes and material circumstances. (SHI35). Our numbers are small because negotiating access to social networks was slow and labour intensive, but our methods generated exceptionally rich data. (BMJ21). This study could be criticised for using a small and unrepresentative sample. Given that older adults have been ignored in the research concerning suntanning, fair-skinned older adults are the most likely to experience skin cancer, and women privilege appearance over health when it comes to sunbathing practices, our study offers depth and richness of data in a demographic group much in need of research attention. (SHI57).

‘Good enough’ sample sizes

Only four articles expressed some degree of confidence that their achieved sample size was sufficient. For example, SHI139, in line with the justification of thematic saturation that it offered, expressed trust in the sufficiency of its sample size despite a poor response rate. Similarly, BJHP04, which did not provide a sample size justification, explained that it had targeted a larger sample in order to recruit a sufficient number of interviewees, given the anticipated low response rate.

Twenty-three people with type I diabetes from the target population of 133 ( i.e. 17.3%) consented to participate but four did not then respond to further contacts (total N = 19). The relatively low response rate was anticipated, due to the busy life-styles of young people in the age range, the geographical constraints, and the time required to participate in a semi-structured interview, so a larger target sample allowed a sufficient number of participants to be recruited. (BJHP04).

Two other articles (BJHP35; SHI32) linked the claimed sufficiency to the scope (i.e. ‘small, in-depth investigation’), aims and nature (i.e. ‘exploratory’) of their studies, thus anchoring their numbers to the particular context of their research. Nevertheless, claims of sample size sufficiency were sometimes undermined when they were juxtaposed with an acknowledgement that a larger sample size would be more scientifically productive.

Although our sample size was sufficient for this exploratory study, a more diverse sample including participants with lower socioeconomic status and more ethnic variation would be informative. A larger sample could also ensure inclusion of a more representative range of apps operating on a wider range of platforms. (BJHP35).

‘Large’ sample sizes: promise or peril?

Three articles (BMJ13; BJHP05; BJHP48), all of which provided the justification of saturation, characterised their sample size as ‘large’ and narrated this oversufficiency in positive terms, as it allowed richer data and findings and enhanced the potential for generalisation. The type of generalisation aspired to (BJHP48), however, was not further specified.

This study used rich data provided by a relatively large sample of expert informants on an important but under-researched topic. (BMJ13). Qualitative research provides a unique opportunity to understand a clinical problem from the patient’s perspective. This study had a large diverse sample, recruited through a range of locations and used in-depth interviews which enhance the richness and generalizability of the results. (BJHP48).

Whilst a ‘large’ sample size was endorsed and valued by some qualitative researchers, within the psychological tradition of IPA a ‘large’ sample size was counter-normative and therefore needed to be justified. Four BJHP studies, all adopting IPA, either expressed the appropriateness or desirability of ‘small’ sample sizes (BJHP41; BJHP45) or hastened to explain why they included a larger than typical sample size (BJHP32; BJHP47). For example, BJHP32 below provides a rationale for how an IPA study can accommodate a large sample size and why this was suitable for the purposes of the particular research. To strengthen the explanation for choosing a non-normative sample size, previous IPA research citing a similar sample size approach is used as a precedent.

Small scale IPA studies allow in-depth analysis which would not be possible with larger samples (Smith et al. , 2009). (BJHP41). Although IPA generally involves intense scrutiny of a small number of transcripts, it was decided to recruit a larger diverse sample as this is the first qualitative study of this population in the United Kingdom (as far as we know) and we wanted to gain an overview. Indeed, Smith, Flowers, and Larkin (2009) agree that IPA is suitable for larger groups. However, the emphasis changes from an in-depth individualistic analysis to one in which common themes from shared experiences of a group of people can be elicited and used to understand the network of relationships between themes that emerge from the interviews. This large-scale format of IPA has been used by other researchers in the field of false-positive research. Baillie, Smith, Hewison, and Mason (2000) conducted an IPA study, with 24 participants, of ultrasound screening for chromosomal abnormality; they found that this larger number of participants enabled them to produce a more refined and cohesive account. (BJHP32).

The IPA articles found in the BJHP were the only instances where a ‘small’ sample size was advocated and a ‘large’ sample size problematized and defended. These IPA studies illustrate that the characterisation of sample size sufficiency can be a function of researchers’ theoretical and epistemological commitments rather than the result of an ‘objective’ sample size assessment.

Threats from sample size insufficiency

As shown above, the majority of articles that commented on their sample size simultaneously characterised it as small and problematic. On those occasions when authors did not simply cite their ‘small’ sample size as a study limitation but went on to provide an account of how and why a small sample size was problematic, two important scientific qualities of the research seemed to be threatened: the generalizability and validity of results.

Generalizability

Those who characterised their sample as ‘small’ connected this to a limited potential for generalisation of the results. Other features of the sample – often some kind of compositional particularity – were also linked to limited potential for generalisation. Though the articles did not always explicitly articulate what form of generalisation they referred to (see BJHP09), generalisation was mostly conceived in nomothetic terms, that is, as the potential to draw inferences from the sample to the broader study population (‘representational generalisation’ – see BJHP31), and less often to other populations or cultures.

It must be noted that samples are small and whilst in both groups the majority of those women eligible participated, generalizability cannot be assumed. (BJHP09). The study’s limitations should be acknowledged: Data are presented from interviews with a relatively small group of participants, and thus, the views are not necessarily generalizable to all patients and clinicians. In particular, patients were only recruited from secondary care services where COFP diagnoses are typically confirmed. The sample therefore is unlikely to represent the full spectrum of patients, particularly those who are not referred to, or who have been discharged from dental services. (BJHP31).

Without explicitly using the term generalisation, two SHI articles noted how their ‘small’ sample size imposed limits on ‘the extent that we can extrapolate from these participants’ accounts’ (SHI114) or to the possibility ‘to draw far-reaching conclusions from the results’ (SHI124).

Interestingly, only a minority of articles alluded to, or invoked, a type of generalisation that is aligned with qualitative research, that is, idiographic generalisation (i.e. generalisation that can be made from and about cases [5]). These articles, all published in the discipline of sociology, defended their findings in terms of the possibility of drawing logical and conceptual inferences to other contexts and of generating understanding that has the potential to advance knowledge, despite their ‘small’ size. One article (SHI139) explicitly contrasted nomothetic (statistical) generalisation with idiographic generalisation, arguing that the lack of statistical generalizability does not preclude qualitative research from being relevant beyond the sample studied.

Further, these data do not need to be statistically generalisable for us to draw inferences that may advance medicalisation analyses (Charmaz 2014). These data may be seen as an opportunity to generate further hypotheses and are a unique application of the medicalisation framework. (SHI139). Although a small-scale qualitative study related to school counselling, this analysis can be usefully regarded as a case study of the successful utilisation of mental health-related resources by adolescents. As many of the issues explored are of relevance to mental health stigma more generally, it may also provide insights into adult engagement in services. It shows how a sociological analysis, which uses positioning theory to examine how people negotiate, partially accept and simultaneously resist stigmatisation in relation to mental health concerns, can contribute to an elucidation of the social processes and narrative constructions which may maintain as well as bridge the mental health service gap. (SHI103).

Only one article (SHI30) used the term transferability to argue for the potential wider relevance of the results, which was thought to be more the product of the composition of the sample (i.e. its diversity) than of the sample size.

Internal validity

The second major concern arising from a ‘small’ sample size pertained to the internal validity of findings (the term is used here to denote the ‘truth’ or credibility of research findings). Authors expressed uncertainty about the degree of confidence in particular aspects or patterns of their results, primarily those that concerned some form of differentiation on the basis of relevant participant characteristics.

The information source preferred seemed to vary according to parents’ education; however, the sample size is too small to draw conclusions about such patterns. (SHI80). Although our numbers were too small to demonstrate gender differences with any certainty, it does seem that the biomedical and erotic scripts may be more common in the accounts of men and the relational script more common in the accounts of women. (SHI81).

In other instances, articles expressed uncertainty about whether their results accounted for the full spectrum and variation of the phenomenon under investigation. In other words, a ‘small’ sample size (alongside compositional ‘deficits’ such as a sample that was not statistically representative) was seen to threaten the ‘content validity’ of the results, which in turn led the study conclusions to be constructed as tentative.

Data collection ceased on pragmatic grounds rather than when no new information appeared to be obtained ( i.e. , saturation point). As such, care should be taken not to overstate the findings. Whilst the themes from the initial interviews seemed to be replicated in the later interviews, further interviews may have identified additional themes or provided more nuanced explanations. (BJHP53). …it should be acknowledged that this study was based on a small sample of self-selected couples in enduring marriages who were not broadly representative of the population. Thus, participants may not be representative of couples that experience postnatal PTSD. It is therefore unlikely that all the key themes have been identified and explored. For example, couples who were excluded from the study because the male partner declined to participate may have been experiencing greater interpersonal difficulties. (BJHP03).

In other instances, articles attempted to preserve a degree of credibility of their results, despite the recognition that the sample size was ‘small’. Clarity and sharpness of emerging themes and alignment with previous relevant work were the arguments employed to warrant the validity of the results.

This study focused on British Chinese carers of patients with affective disorders, using a qualitative methodology to synthesise the sociocultural representations of illness within this community. Despite the small sample size, clear themes emerged from the narratives that were sufficient for this exploratory investigation. (SHI98).

The present study sought to examine how qualitative sample sizes in health-related research are characterised and justified. In line with previous studies [22, 30, 33, 34], the findings demonstrate that reporting of sample size sufficiency is limited; just over 50% of articles in the BMJ and BJHP, and 82% in the SHI, did not provide any sample size justification. Providing a sample size justification was not related to the number of interviews conducted, but it was associated with the journal in which the article was published, indicating the influence of disciplinary or publishing norms, as also reported in prior research [30]. This lack of transparency about sample size sufficiency is problematic given that most qualitative researchers would agree that it is an important marker of quality [56, 57]. Moreover, with the rise of qualitative research in the social sciences, efforts to synthesise existing evidence and assess its quality are obstructed by poor reporting [58, 59].
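
The association between journal and the presence of a justification can be checked with a chi-square test of independence on a journal-by-justification contingency table. The sketch below uses illustrative counts, not the study's actual data; with a 2 × 3 table the degrees of freedom equal 2, for which the chi-square survival function reduces exactly to exp(-x/2), so only the standard library is needed:

```python
import math

def chi_square_independence_2x3(table):
    """Pearson chi-square test for a 2-row x 3-column contingency table.

    Returns (statistic, p_value). With df = (2-1)*(3-1) = 2, the
    chi-square survival function simplifies to exp(-statistic / 2).
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    statistic = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            statistic += (observed - expected) ** 2 / expected
    return statistic, math.exp(-statistic / 2)

# Hypothetical counts of articles with / without a sample size
# justification per journal (BMJ, BJHP, SHI) -- not the study's data.
with_justification = [25, 23, 16]
without_justification = [25, 24, 73]
stat, p = chi_square_independence_2x3([with_justification, without_justification])
```

A small p value here would indicate that the proportion of articles providing a justification differs across the three journals, which is the kind of journal-level association the study reports.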

When authors justified their sample size, our findings indicate that sufficiency was mostly appraised with reference to features intrinsic to the study, in agreement with general advice on sample size determination [4, 11, 36]. The principle of saturation was the most commonly invoked argument [22], accounting for 55% of all justifications. A wide range of variants of saturation was evident, corroborating the proliferation of meanings of the term [49] and reflecting different underlying conceptualisations or models of saturation [20]. Nevertheless, claims of saturation were never substantiated in relation to procedures conducted in the study itself, endorsing similar observations in the literature [25, 30, 47]. Claims of saturation were sometimes supported with citations of other literature, suggesting a removal of the concept from the characteristics of the study at hand. Pragmatic considerations, such as resource constraints or participant response rate and availability, were the second most frequently used argument, accounting for approximately 10% of justifications; another 23% of justifications likewise represented intrinsic-to-the-study characteristics (i.e. qualities of the analysis, meeting sampling or research design requirements, richness and volume of the data obtained, nature of the study, and further sampling to check consistency of the findings).

Only 12% of mentions of sample size justification pertained to arguments external to the study at hand, in the form of existing sample size guidelines and prior research that sets precedents. Whilst community norms and prior research can establish useful rules of thumb for estimating sample sizes [60] – and reveal what sizes are more likely to be acceptable within research communities – researchers should avoid adopting these norms uncritically, especially when such guidelines [e.g. 30, 35] might be based on research that does not provide adequate evidence of sample size sufficiency. Similarly, whilst methodological research that seeks to demonstrate the achievement of saturation is invaluable, since it explicates the parameters upon which saturation is contingent and indicates when a research project is likely to require a smaller or a larger sample [e.g. 29], the specific numbers at which saturation was achieved within those projects cannot be routinely extrapolated to other projects. We concur with existing views [11, 36] that consideration of the characteristics of the study at hand, such as the epistemological and theoretical approach, the nature of the phenomenon under investigation, the aims and scope of the study, the quality and richness of data, and the researcher’s experience and skill in conducting qualitative research, should be the primary guide in determining sample size and assessing its sufficiency.

Moreover, although numbers in qualitative research are not unimportant [61], sample size should not be considered alone but should be embedded in the more encompassing examination of data adequacy [56, 57]. Erickson’s [62] dimensions of ‘evidentiary adequacy’ are useful here. He explains the concept in terms of adequate amounts of evidence, adequate variety in kinds of evidence, adequate interpretive status of evidence, adequate disconfirming evidence, and adequate discrepant case analysis. Not all dimensions will be relevant across all qualitative research designs, but they illustrate the thickness of the concept of data adequacy, taking it beyond sample size.

The present research also demonstrated that sample sizes were commonly seen as ‘small’ and insufficient, and were discussed as a limitation. Often unjustified (and in two cases incongruent with the articles’ own claims of saturation), these characterisations imply that sample size in qualitative health research is often adversely judged (or expected to be judged) against an implicit, yet omnipresent, quasi-quantitative standpoint. Indeed, there were a few instances in our data where authors appeared, possibly in response to reviewers, to resist some form of quantification of their results. This implicit reference point became more apparent when authors discussed the threats deriving from an insufficient sample size. Whilst the concerns about internal validity might be legitimate to the extent that qualitative research projects, which are broadly related to realism, are set to examine phenomena in sufficient breadth and depth, the concerns around generalizability revealed a conceptualisation that is not compatible with purposive sampling. The limited potential for generalisation resulting from a small sample size was often discussed in nomothetic, statistical terms. Only occasionally was analytic or idiographic generalisation invoked to warrant the value of the study’s findings [5, 17].

Strengths and limitations of the present study

We note, first, the limited number of health-related journals reviewed, meaning that only a ‘snapshot’ of qualitative health research has been captured. Examining additional disciplines (e.g. nursing sciences) as well as interdisciplinary journals would add to the findings of this analysis. Nevertheless, our study is the first to provide comparative insights across disciplines that are differently attached to the legacy of positivism, and it analysed literature published over a lengthy period of time (15 years). Guetterman [27] also examined health-related literature, but that analysis was restricted to the 26 most highly cited articles published over a period of five years, whilst Carlsen and Glenton’s [22] study concentrated on focus-group health research. Moreover, although it was our intention to examine sample size justification in relation to the epistemological and theoretical positions of articles, this proved challenging, largely because of the absence of relevant information and the difficulty of clearly discerning articles’ positions [63] and classifying them under specific approaches (e.g. studies often combined elements from different theoretical and epistemological traditions). We believe that such an analysis would yield useful insights, as it links the methodological issue of sample size to the broader philosophical stance of the research. Despite these limitations, the analysis of the characterisation of sample size, and of the threats seen to accrue from insufficient sample size, enriches our understanding of sample size (in)sufficiency argumentation by linking it to other features of the research. As the peer-review process becomes increasingly public, future research could usefully examine how reporting around sample size sufficiency and data adequacy might be influenced by the interactions between authors and reviewers.

The past decade has seen a growing appetite in qualitative research for an evidence-based approach to sample size determination and to evaluations of the sufficiency of sample size. Despite the conceptual and methodological developments in the area, the findings of the present study confirm previous studies in concluding that appraisals of sample size sufficiency are either absent or poorly substantiated. To ensure and maintain the high-quality research that will encourage greater appreciation of qualitative work in the health-related sciences [64], we argue that qualitative researchers should be more transparent and thorough in their evaluation of sample size as part of their appraisal of data adequacy. We encourage the practice of appraising sample size sufficiency with close reference to the study at hand, and we thus caution against responding to the growing methodological research in this area with a decontextualised application of numerical sample size guidelines, norms and principles. Although researchers might find that sample size community norms serve as useful rules of thumb, we recommend that methodological knowledge be used to consider critically how saturation and the other parameters that affect sample size sufficiency pertain to the specifics of the particular project. Those reviewing papers have a vital role in encouraging transparent, study-specific reporting. The review process should support authors in exercising nuanced judgements about sample size determination in the context of the range of factors that influence sample size sufficiency and the specifics of a particular study. In light of the growing methodological evidence in the area, transparent presentation of such evidence-based judgement is crucial and in time should obviate the seemingly routine practice of citing the ‘small’ size of qualitative samples among the study limitations.

A non-parametric test of difference for independent samples was performed since the variable number of interviews violated assumptions of normality according to the standardized scores of skewness and kurtosis (BMJ: z skewness = 3.23, z kurtosis = 1.52; BJHP: z skewness = 4.73, z kurtosis = 4.85; SHI: z skewness = 12.04, z kurtosis = 21.72) and the Shapiro-Wilk test of normality ( p  < .001).
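
The standardized scores reported above divide each sample moment by its approximate standard error, with values beyond ±1.96 suggesting a departure from normality at the .05 level. A minimal stdlib sketch of that computation, using the common large-sample approximations SE_skew ≈ sqrt(6/n) and SE_kurt ≈ sqrt(24/n) and illustrative data rather than the study's interview counts (in practice the Shapiro-Wilk step would come from a statistics library such as scipy.stats.shapiro):

```python
import math

def standardized_skew_kurtosis(values):
    """Return (z_skewness, z_kurtosis) for a sample.

    Uses the large-sample approximations SE_skew = sqrt(6/n) and
    SE_kurt = sqrt(24/n); z-scores beyond +/-1.96 indicate a departure
    from normality at the .05 level, motivating a non-parametric test.
    """
    n = len(values)
    mean = sum(values) / n
    m2 = sum((x - mean) ** 2 for x in values) / n
    m3 = sum((x - mean) ** 3 for x in values) / n
    m4 = sum((x - mean) ** 4 for x in values) / n
    skewness = m3 / m2 ** 1.5
    excess_kurtosis = m4 / m2 ** 2 - 3.0
    return skewness / math.sqrt(6.0 / n), excess_kurtosis / math.sqrt(24.0 / n)

# Illustrative interview counts (not the study's data): a right-skewed
# sample yields a positive z_skewness, as reported for all three journals.
z_skew, z_kurt = standardized_skew_kurtosis([8, 10, 12, 15, 20, 25, 30, 60])
```

When either z-score (or a Shapiro-Wilk p value) indicates non-normality, a non-parametric test of difference is the usual fallback; Kruskal-Wallis (e.g. scipy.stats.kruskal) would fit three independent journal groups, though the article does not name the specific test it used.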

Abbreviations

BJHP: British Journal of Health Psychology

BMJ: British Medical Journal

IPA: Interpretative Phenomenological Analysis

SHI: Sociology of Health & Illness

Spencer L, Ritchie J, Lewis J, Dillon L. Quality in qualitative evaluation: a framework for assessing research evidence. National Centre for Social Research 2003 https://www.heacademy.ac.uk/system/files/166_policy_hub_a_quality_framework.pdf Accessed 11 May 2018.

Fusch PI, Ness LR. Are we there yet? Data saturation in qualitative research. Qual Rep. 2015;20(9):1408–16.

Robinson OC. Sampling in interview-based qualitative research: a theoretical and practical guide. Qual Res Psychol. 2014;11(1):25–41.

Sandelowski M. Sample size in qualitative research. Res Nurs Health. 1995;18(2):179–83.

Sandelowski M. One is the liveliest number: the case orientation of qualitative research. Res Nurs Health. 1996;19(6):525–9.

Luborsky MR, Rubinstein RL. Sampling in qualitative research: rationale, issues, and methods. Res Aging. 1995;17(1):89–113.

Marshall MN. Sampling for qualitative research. Fam Pract. 1996;13(6):522–6.

Patton MQ. Qualitative evaluation and research methods. 2nd ed. Newbury Park, CA: Sage; 1990.

van Rijnsoever FJ. (I Can’t get no) saturation: a simulation and guidelines for sample sizes in qualitative research. PLoS One. 2017;12(7):e0181689.

Morse JM. The significance of saturation. Qual Health Res. 1995;5(2):147–9.

Morse JM. Determining sample size. Qual Health Res. 2000;10(1):3–5.

Gergen KJ, Josselson R, Freeman M. The promises of qualitative inquiry. Am Psychol. 2015;70(1):1–9.

Borsci S, Macredie RD, Barnett J, Martin J, Kuljis J, Young T. Reviewing and extending the five-user assumption: a grounded procedure for interaction evaluation. ACM Trans Comput Hum Interact. 2013;20(5):29.

Borsci S, Macredie RD, Martin JL, Young T. How many testers are needed to assure the usability of medical devices? Expert Rev Med Devices. 2014;11(5):513–25.

Glaser BG, Strauss AL. The discovery of grounded theory: strategies for qualitative research. Chicago, IL: Aldine; 1967.

Kerr C, Nixon A, Wild D. Assessing and demonstrating data saturation in qualitative inquiry supporting patient-reported outcomes research. Expert Rev Pharmacoecon Outcomes Res. 2010;10(3):269–81.

Lincoln YS, Guba EG. Naturalistic inquiry. London: Sage; 1985.

Malterud K, Siersma VD, Guassora AD. Sample size in qualitative interview studies: guided by information power. Qual Health Res. 2015;26:1753–60.

Nelson J. Using conceptual depth criteria: addressing the challenge of reaching saturation in qualitative research. Qual Res. 2017;17(5):554–70.

Saunders B, Sim J, Kingstone T, Baker S, Waterfield J, Bartlam B, et al. Saturation in qualitative research: exploring its conceptualization and operationalization. Qual Quant. 2017. https://doi.org/10.1007/s11135-017-0574-8 .

Caine K. Local standards for sample size at CHI. In Proceedings of the 2016 CHI conference on human factors in computing systems. 2016;981–992. ACM.

Carlsen B, Glenton C. What about N? A methodological study of sample-size reporting in focus group studies. BMC Med Res Methodol. 2011;11(1):26.

Constantinou CS, Georgiou M, Perdikogianni M. A comparative method for themes saturation (CoMeTS) in qualitative interviews. Qual Res. 2017;17(5):571–88.

Dai NT, Free C, Gendron Y. Interview-based research in accounting 2000–2014: a review. November 2016. https://ssrn.com/abstract=2711022 or https://doi.org/10.2139/ssrn.2711022 . Accessed 17 May 2018.

Francis JJ, Johnston M, Robertson C, Glidewell L, Entwistle V, Eccles MP, et al. What is an adequate sample size? Operationalising data saturation for theory-based interview studies. Psychol Health. 2010;25(10):1229–45.

Guest G, Bunce A, Johnson L. How many interviews are enough? An experiment with data saturation and variability. Field Methods. 2006;18(1):59–82.

Guetterman TC. Descriptions of sampling practices within five approaches to qualitative research in education and the health sciences. Forum Qual Soc Res. 2015;16(2):25. http://nbn-resolving.de/urn:nbn:de:0114-fqs1502256 . Accessed 17 May 2018.

Hagaman AK, Wutich A. How many interviews are enough to identify metathemes in multisited and cross-cultural research? Another perspective on Guest, Bunce, and Johnson’s (2006) landmark study. Field Methods. 2017;29(1):23–41.

Hennink MM, Kaiser BN, Marconi VC. Code saturation versus meaning saturation: how many interviews are enough? Qual Health Res. 2017;27(4):591–608.

Marshall B, Cardon P, Poddar A, Fontenot R. Does sample size matter in qualitative research?: a review of qualitative interviews in IS research. J Comput Inform Syst. 2013;54(1):11–22.

Mason M. Sample size and saturation in PhD studies using qualitative interviews. Forum Qual Soc Res 2010;11(3):8. http://nbn-resolving.de/urn:nbn:de:0114-fqs100387 . Accessed 17 May 2018.

Safman RM, Sobal J. Qualitative sample extensiveness in health education research. Health Educ Behav. 2004;31(1):9–21.

Saunders MN, Townsend K. Reporting and justifying the number of interview participants in organization and workplace research. Br J Manag. 2016;27(4):836–52.

Sobal J. Sample extensiveness in qualitative nutrition education research. J Nutr Educ. 2001;33(4):184–92.

Thomson SB. Sample size and grounded theory. JOAAG. 2010;5(1). http://www.joaag.com/uploads/5_1__Research_Note_1_Thomson.pdf . Accessed 17 May 2018.

Baker SE, Edwards R. How many qualitative interviews is enough?: expert voices and early career reflections on sampling and cases in qualitative research. National Centre for Research Methods Review Paper. 2012; http://eprints.ncrm.ac.uk/2273/4/how_many_interviews.pdf . Accessed 17 May 2018.

Ogden J, Cornwell D. The role of topic, interviewee, and question in predicting rich interview data in the field of health research. Sociol Health Illn. 2010;32(7):1059–71.

Green J, Thorogood N. Qualitative methods for health research. London: Sage; 2004.

Ritchie J, Lewis J, Elam G. Designing and selecting samples. In: Ritchie J, Lewis J, editors. Qualitative research practice: a guide for social science students and researchers. London: Sage; 2003. p. 77–108.

Britten N. Qualitative research: qualitative interviews in medical research. BMJ. 1995;311(6999):251–3.

Creswell JW. Qualitative inquiry and research design: choosing among five approaches. 2nd ed. London: Sage; 2007.

Fugard AJ, Potts HW. Supporting thinking on sample sizes for thematic analyses: a quantitative tool. Int J Soc Res Methodol. 2015;18(6):669–84.

Emmel N. Themes, variables, and the limits to calculating sample size in qualitative research: a response to Fugard and Potts. Int J Soc Res Methodol. 2015;18(6):685–6.

Braun V, Clarke V. (Mis) conceptualising themes, thematic analysis, and other problems with Fugard and Potts’ (2015) sample-size tool for thematic analysis. Int J Soc Res Methodol. 2016;19(6):739–43.

Hammersley M. Sampling and thematic analysis: a response to Fugard and Potts. Int J Soc Res Methodol. 2015;18(6):687–8.

Charmaz K. Constructing grounded theory: a practical guide through qualitative analysis. London: Sage; 2006.

Bowen GA. Naturalistic inquiry and the saturation concept: a research note. Qual Res. 2008;8(1):137–52.

Morse JM. Data were saturated. Qual Health Res. 2015;25(5):587–8.

O’Reilly M, Parker N. ‘Unsatisfactory saturation’: a critical exploration of the notion of saturated sample sizes in qualitative research. Qual Res. 2013;13(2):190–7.

Manen M, Higgins I, Riet P. A conversation with max van Manen on phenomenology in its original sense. Nurs Health Sci. 2016;18(1):4–7.

Dey I. Grounding grounded theory. San Francisco, CA: Academic Press; 1999.

Hays DG, Wood C, Dahl H, Kirk-Jenkins A. Methodological rigor in journal of counseling & development qualitative research articles: a 15-year review. J Couns Dev. 2016;94(2):172–83.

Moher D, Liberati A, Tetzlaff J, Altman DG, Prisma Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009; 6(7): e1000097.

Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.

Boyatzis RE. Transforming qualitative information: thematic analysis and code development. Thousand Oaks, CA: Sage; 1998.

Levitt HM, Motulsky SL, Wertz FJ, Morrow SL, Ponterotto JG. Recommendations for designing and reviewing qualitative research in psychology: promoting methodological integrity. Qual Psychol. 2017;4(1):2–22.

Morrow SL. Quality and trustworthiness in qualitative research in counseling psychology. J Couns Psychol. 2005;52(2):250–60.

Barroso J, Sandelowski M. Sample reporting in qualitative studies of women with HIV infection. Field Methods. 2003;15(4):386–404.

Glenton C, Carlsen B, Lewin S, Munthe-Kaas H, Colvin CJ, Tunçalp Ö, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings—paper 5: how to assess adequacy of data. Implement Sci. 2018;13(Suppl 1):14.

Onwuegbuzie AJ. Leech NL. A call for qualitative power analyses. Qual Quant. 2007;41(1):105–21.

Sandelowski M. Real qualitative researchers do not count: the use of numbers in qualitative research. Res Nurs Health. 2001;24(3):230–40.

Erickson F. Qualitative methods in research on teaching. In: Wittrock M, editor. Handbook of research on teaching. 3rd ed. New York: Macmillan; 1986. p. 119–61.

Bradbury-Jones C, Taylor J, Herber O. How theory is used and articulated in qualitative research: development of a new typology. Soc Sci Med. 2014;120:135–41.

Greenhalgh T, Annandale E, Ashcroft R, Barlow J, Black N, Bleakley A, et al. An open letter to the BMJ editors on qualitative research. BMJ. 2016;i563:352.

Download references

Acknowledgments

We would like to thank Dr. Paula Smith and Katharine Lee for their comments on a previous draft of this paper as well as Natalie Ann Mitchell and Meron Teferra for assisting us with data extraction.

This research was initially conceived of and partly conducted with financial support from the Multidisciplinary Assessment of Technology Centre for Healthcare (MATCH) programme (EP/F063822/1 and EP/G012393/1). The research was continued and completed independently of any support. The funding body had no role in the study design; the collection, analysis, and interpretation of the data; the writing of the paper; or the decision to submit the manuscript for publication. The views expressed are those of the authors alone.

Availability of data and materials

Supporting data can be accessed in the original publications. Additional File 2 lists all eligible studies that were included in the present analysis.

Author information

Authors and Affiliations

Department of Psychology, University of Bath, Building 10 West, Claverton Down, Bath, BA2 7AY, UK

Konstantina Vasileiou & Julie Barnett

School of Psychology, Newcastle University, Ridley Building 1, Queen Victoria Road, Newcastle upon Tyne, NE1 7RU, UK

Susan Thorpe

Department of Computer Science, Brunel University London, Wilfred Brown Building 108, Uxbridge, UB8 3PH, UK

Terry Young


Contributions

JB and TY conceived the study; KV, JB, and TY designed the study; KV identified the articles and extracted the data; KV and JB assessed eligibility of articles; KV, JB, ST, and TY contributed to the analysis of the data, discussed the findings and early drafts of the paper; KV developed the final manuscript; KV, JB, ST, and TY read and approved the manuscript.

Corresponding author

Correspondence to Konstantina Vasileiou .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

Terry Young is an academic who undertakes research and occasional consultancy in the areas of health technology assessment, information systems, and service design. He is unaware of any direct conflict of interest with respect to this paper. All other authors have no competing interests to declare.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional Files

Additional File 1:

Editorial positions on qualitative research and sample considerations (where available). (DOCX 12 kb)

Additional File 2:

List of eligible articles included in the review ( N  = 214). (DOCX 38 kb)

Additional File 3:

Data Extraction Form. (DOCX 15 kb)

Additional File 4:

Citations used by articles to support their position on saturation. (DOCX 14 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Vasileiou, K., Barnett, J., Thorpe, S. et al. Characterising and justifying sample size sufficiency in interview-based studies: systematic analysis of qualitative health research over a 15-year period. BMC Med Res Methodol 18 , 148 (2018). https://doi.org/10.1186/s12874-018-0594-7


Received : 22 May 2018

Accepted : 29 October 2018

Published : 21 November 2018

DOI : https://doi.org/10.1186/s12874-018-0594-7


Keywords

  • Sample size
  • Sample size justification
  • Sample size characterisation
  • Data adequacy
  • Qualitative health research
  • Qualitative interviews
  • Systematic analysis

BMC Medical Research Methodology

ISSN: 1471-2288

  • Published: 05 October 2018

Interviews and focus groups in qualitative research: an update for the digital age

  • P. Gill 1 &
  • J. Baillie 2  

British Dental Journal volume 225, pages 668–672 (2018)

28k Accesses

48 Citations

20 Altmetric


Key points

  • Highlights that qualitative research is used increasingly in dentistry. Interviews and focus groups remain the most common qualitative methods of data collection.

  • Suggests the advent of digital technologies has transformed how qualitative research can now be undertaken.

  • Suggests interviews and focus groups can offer significant, meaningful insight into participants' experiences, beliefs and perspectives, which can help to inform developments in dental practice.

Qualitative research is used increasingly in dentistry, due to its potential to provide meaningful, in-depth insights into participants' experiences, perspectives, beliefs and behaviours. These insights can subsequently help to inform developments in dental practice and further related research. The most common methods of data collection used in qualitative research are interviews and focus groups. While these are primarily conducted face-to-face, the ongoing evolution of digital technologies, such as video chat and online forums, has further transformed these methods of data collection. This paper therefore discusses interviews and focus groups in detail, outlines how they can be used in practice, explains how digital technologies can further inform the data collection process, and considers what these methods can offer dentistry.



Introduction

Traditionally, research in dentistry has primarily been quantitative in nature. 1 However, in recent years, there has been a growing interest in qualitative research within the profession, due to its potential to further inform developments in practice, policy, education and training. Consequently, in 2008, the British Dental Journal (BDJ) published a four-paper qualitative research series, 2 , 3 , 4 , 5 to help increase awareness and understanding of this particular methodological approach.

Since the papers were originally published, two scoping reviews have demonstrated the ongoing proliferation in the use of qualitative research within the field of oral healthcare. 1 , 6 To date, the original four-paper series continues to be well cited and two of the main papers remain widely accessed among the BDJ readership. 2 , 3 The potential value of well-conducted qualitative research to evidence-based practice is now also widely recognised by service providers, policy makers, funding bodies and those who commission, support and use healthcare research.

Besides increasing standalone use, qualitative methods are now also routinely incorporated into larger mixed-method study designs, such as clinical trials, as they can offer additional, meaningful insights into complex problems that simply could not be provided by quantitative methods alone. Qualitative methods can also be used to further facilitate in-depth understanding of important aspects of clinical trial processes, such as recruitment. For example, Ellis et al. investigated why edentulous older patients, dissatisfied with conventional dentures, decline implant treatment, despite its established efficacy, and frequently refuse to participate in related randomised clinical trials, even when financial constraints are removed. 7 Through the use of focus groups in Canada and the UK, the authors found that fears of pain and potential complications, along with perceived embarrassment, exacerbated by age, are common reasons why older patients typically refuse dental implants. 7

The last decade has also seen further developments in qualitative research, due to the ongoing evolution of digital technologies. These developments have transformed how researchers can access and share information, communicate and collaborate, recruit and engage participants, collect and analyse data and disseminate and translate research findings. 8 Where appropriate, such technologies are therefore capable of extending and enhancing how qualitative research is undertaken. 9 For example, it is now possible to collect qualitative data via instant messaging, email or online/video chat, using appropriate online platforms.

These innovative approaches to research are therefore cost-effective, convenient, reduce geographical constraints and are often useful for accessing 'hard to reach' participants (for example, those who are immobile or socially isolated). 8 , 9 However, digital technologies are still relatively new and constantly evolving and therefore present a variety of pragmatic and methodological challenges. Furthermore, given their very nature, their use in many qualitative studies and/or with certain participant groups may be inappropriate and should therefore always be carefully considered. While it is beyond the scope of this paper to provide a detailed explication regarding the use of digital technologies in qualitative research, insight is provided into how such technologies can be used to facilitate the data collection process in interviews and focus groups.

In light of such developments, it is perhaps timely to update the main paper 3 of the original BDJ series. As with the previous publications, this paper has been purposely written in an accessible style, to enhance readability, particularly for those who are new to qualitative research. While the focus remains on the most common qualitative methods of data collection – interviews and focus groups – appropriate revisions have been made to provide a novel perspective, so the paper should be helpful to those who would like to know more about qualitative research. This paper focuses specifically on undertaking qualitative research with adult participants only.

Overview of qualitative research

Qualitative research is an approach that focuses on people and their experiences, behaviours and opinions. 10 , 11 The qualitative researcher seeks to answer questions of 'how' and 'why', providing detailed insight and understanding, 11 which quantitative methods cannot reach. 12 Within qualitative research, there are distinct methodologies influencing how the researcher approaches the research question, data collection and data analysis. 13 For example, phenomenological studies focus on the lived experience of individuals, explored through their description of the phenomenon. Ethnographic studies explore the culture of a group and typically involve the use of multiple methods to uncover the issues. 14

While methodology is the 'thinking tool', the methods are the 'doing tools'; 13 the ways in which data are collected and analysed. There are multiple qualitative data collection methods, including interviews, focus groups, observations, documentary analysis, participant diaries, photography and videography. Two of the most commonly used qualitative methods are interviews and focus groups, which are explored in this article. The data generated through these methods can be analysed in one of many ways, according to the methodological approach chosen. A common approach is thematic data analysis, involving the identification of themes and subthemes across the data set. Further information on approaches to qualitative data analysis has been discussed elsewhere. 1
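Thematic analysis itself is interpretive work done by the researcher, not by software, but the bookkeeping it involves — tracking which coded excerpts support which themes and subthemes across the data set — can be illustrated with a small sketch. The participant IDs, theme names and quotes below are entirely invented for illustration:

```python
from collections import defaultdict

# Invented coded excerpts: (participant, theme, subtheme, quote).
# In practice, codes emerge from repeated close reading of transcripts.
coded_excerpts = [
    ("P01", "barriers", "fear of pain", "I kept putting the appointment off."),
    ("P02", "barriers", "cost", "It was just too expensive for us."),
    ("P01", "facilitators", "family support", "My daughter books it all for me."),
    ("P03", "barriers", "fear of pain", "The drill is what scares me most."),
]

# Organise excerpts by theme, then subtheme, across the data set.
themes = defaultdict(lambda: defaultdict(list))
for participant, theme, subtheme, quote in coded_excerpts:
    themes[theme][subtheme].append((participant, quote))

# Summarise how widely each subtheme is supported.
for theme, subthemes in themes.items():
    for subtheme, excerpts in subthemes.items():
        participants = {p for p, _ in excerpts}
        print(f"{theme} / {subtheme}: {len(excerpts)} excerpt(s), "
              f"{len(participants)} participant(s)")
```

A structure like this simply records which participants support which theme; the analytic work of naming themes and relating them to one another remains a human judgement.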

Qualitative research is an evolving and adaptable approach, used by different disciplines for different purposes. Traditionally, qualitative data, specifically interviews, focus groups and observations, have been collected face-to-face with participants. In more recent years, digital technologies have contributed to the ongoing evolution of qualitative research. Digital technologies offer researchers different ways of recruiting participants and collecting data, and offer participants opportunities to be involved in research that is not necessarily face-to-face.

Interviews

Research interviews are a fundamental qualitative research method 15 and are utilised across methodological approaches. Interviews enable the researcher to learn in depth about the perspectives, experiences, beliefs and motivations of the participant. 3 , 16 Examples include exploring patients' perspectives of fear/anxiety triggers in dental treatment, 17 patients' experiences of oral health and diabetes, 18 and dental students' motivations for their choice of career. 19

Interviews may be structured, semi-structured or unstructured, 3 according to the purpose of the study, with less structured interviews facilitating a more in depth and flexible interviewing approach. 20 Structured interviews are similar to verbal questionnaires and are used if the researcher requires clarification on a topic; however they produce less in-depth data about a participant's experience. 3 Unstructured interviews may be used when little is known about a topic and involves the researcher asking an opening question; 3 the participant then leads the discussion. 20 Semi-structured interviews are commonly used in healthcare research, enabling the researcher to ask predetermined questions, 20 while ensuring the participant discusses issues they feel are important.

Interviews can be undertaken face-to-face or using digital methods when the researcher and participant are in different locations. Audio-recording the interview, with the consent of the participant, is essential for all interviews regardless of the medium, as it enables accurate transcription: the process of turning the audio file into a word-for-word transcript. This transcript is the data, which the researcher then analyses according to the chosen approach.

Types of interview

Qualitative studies often utilise one-to-one, face-to-face interviews with research participants. This involves arranging a mutually convenient time and place to meet the participant, signing a consent form and audio-recording the interview. However, digital technologies have expanded the potential for interviews in research, enabling individuals to participate in qualitative research regardless of location.

Telephone interviews can be a useful alternative to face-to-face interviews and are commonly used in qualitative research. They enable participants from different geographical areas to participate and may be less onerous for participants than meeting a researcher in person. 15 A qualitative study explored patients' perspectives of dental implants and utilised telephone interviews due to the quality of the data that could be yielded. 21 The researcher needs to consider how they will audio record the interview, which can be facilitated by purchasing a recorder that connects directly to the telephone. One potential disadvantage of telephone interviews is that the interviewer and participant cannot see each other. This can be resolved by using software for audio and video calls online – such as Skype – to conduct interviews with participants in qualitative studies. Advantages of this approach include being able to see the participant if video calls are used, enabling observation of non-verbal communication, and the fact that the software can be free to use. However, participants are required to have a device and internet connection, and to be computer literate, which may limit who can participate in the study. One qualitative study explored the role of dental hygienists in reducing oral health disparities in Canada. 22 The researcher conducted interviews using Skype, which enabled dental hygienists from across Canada to be interviewed within the research budget, while accommodating the participants' schedules. 22

A less commonly used approach to qualitative interviews is the use of social virtual worlds. A qualitative study accessed a social virtual world – Second Life – to explore the health literacy skills of individuals who use social virtual worlds to access health information. 23 The researcher created an avatar and interview room, and undertook interviews with participants using voice and text methods. 23 This approach to recruitment and data collection enables individuals from diverse geographical locations to participate, while remaining anonymous if they wish. Furthermore, for interviews conducted using text methods, transcription of the interview is not required as the researcher can save the written conversation with the participant, with the participant's consent. However, the researcher and participant need to be familiar with how the social virtual world works to engage in an interview this way.

Conducting an interview

Ensuring informed consent before any interview is a fundamental aspect of the research process. Participants in research must be afforded autonomy and respect; consent should be informed and voluntary. 24 Individuals should have the opportunity to read an information sheet about the study, ask questions, understand how their data will be stored and used, and know that they are free to withdraw at any point without reprisal. The qualitative researcher should take written consent before undertaking the interview. In a face-to-face interview, this is straightforward: the researcher and participant both sign copies of the consent form, keeping one each. However, this approach is less straightforward when the researcher and participant do not meet in person. A recent protocol paper outlined an approach for taking consent for telephone interviews, which involved: audio recording the participant agreeing to each point on the consent form; the researcher signing the consent form and keeping a copy; and posting a copy to the participant. 25 This process could be replicated in other interview studies using digital methods.

There are advantages and disadvantages of using face-to-face and digital methods for research interviews. Ultimately, for both approaches, the quality of the interview is determined by the researcher. 16 Appropriate training and preparation are thus required. Healthcare professionals can use their interpersonal communication skills when undertaking a research interview, particularly questioning, listening and conversing. 3 However, the purpose of an interview is to gain information about the study topic, 26 rather than to offer help and advice. 3 The researcher therefore needs to listen attentively to participants, enabling them to describe their experience without interruption. 3 The use of active listening skills also helps to facilitate the interview. 14 Spradley outlined elements and strategies for research interviews, 27 which are a useful guide for qualitative researchers:

Greeting and explaining the project/interview

Asking descriptive (broad), structural (explore response to descriptive) and contrast (difference between) questions

Asymmetry between the researcher and participant talking

Expressing interest and cultural ignorance

Repeating, restating and incorporating the participant's words when asking questions

Creating hypothetical situations

Asking friendly questions

Knowing when to leave.

For semi-structured interviews, a topic guide (also called an interview schedule) is used to guide the content of the interview – an example of a topic guide is outlined in Box 1 . The topic guide, usually based on the research questions, existing literature and, for healthcare professionals, their clinical experience, is developed by the research team. The topic guide should include open-ended questions that elicit in-depth information, and offer participants the opportunity to talk about issues important to them. This is vital in qualitative research, where the researcher is interested in exploring the experiences and perspectives of participants. It can be useful for qualitative researchers to pilot the topic guide with the first participants, 10 to ensure the questions are relevant and understandable, and to amend them if required.

Regardless of the medium of interview, the researcher must consider the setting of the interview. For face-to-face interviews, this could be in the participant's home, in an office or another mutually convenient location. A quiet location is preferable to promote confidentiality, enable the researcher and participant to concentrate on the conversation, and to facilitate accurate audio-recording of the interview. For interviews using digital methods the same principles apply: a quiet, private space where the researcher and participant feel comfortable and confident to participate in an interview.

Box 1: Example of a topic guide

Study focus: Parents' experiences of brushing their child's (aged 0–5) teeth

1. Can you tell me about your experience of cleaning your child's teeth?

How old was your child when you started cleaning their teeth?

Why did you start cleaning their teeth at that point?

How often do you brush their teeth?

What do you use to brush their teeth and why?

2. Could you explain how you find cleaning your child's teeth?

Do you find anything difficult?

What makes cleaning their teeth easier for you?

3. How has your experience of cleaning your child's teeth changed over time?

Has it become easier or harder?

Have you changed how often and how you clean their teeth? If so, why?

4. Could you describe how your child finds having their teeth cleaned?

What do they enjoy about having their teeth cleaned?

Is there anything they find upsetting about having their teeth cleaned?

5. Where do you look for information/advice about cleaning your child's teeth?

What did your health visitor tell you about cleaning your child's teeth? (If anything)

What has the dentist told you about caring for your child's teeth? (If visited)

Have any family members given you advice about how to clean your child's teeth? If so, what did they tell you? Did you follow their advice?

6. Is there anything else you would like to discuss about this?
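For teams that manage study materials electronically, a topic guide such as the one in Box 1 can be held as simple structured data, which makes amendments after piloting easy to apply and track. The structure, function name and rewording below are an illustrative sketch only, not part of any cited study:

```python
# Illustrative sketch: a topic guide as main questions with follow-up prompts.
topic_guide = [
    {"question": "Can you tell me about your experience of cleaning your child's teeth?",
     "prompts": ["How old was your child when you started cleaning their teeth?",
                 "How often do you brush their teeth?"]},
    {"question": "Could you explain how you find cleaning your child's teeth?",
     "prompts": ["Do you find anything difficult?",
                 "What makes cleaning their teeth easier for you?"]},
]

def amend_question(guide, index, new_wording):
    """Reword a main question after piloting feedback, returning the guide."""
    guide[index]["question"] = new_wording
    return guide

# After piloting, suppose the second question proved hard to understand.
amend_question(topic_guide, 1, "How do you find cleaning your child's teeth day to day?")
```

Keeping the guide in one structured place means every interviewer works from the same amended wording, which supports consistency across interviews.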

Focus groups

A focus group is a moderated group discussion on a pre-defined topic, for research purposes. 28 , 29 While not aligned to a particular qualitative methodology (for example, grounded theory or phenomenology) as such, focus groups are used increasingly in healthcare research, as they are useful for exploring collective perspectives, attitudes, behaviours and experiences. Consequently, they can yield rich, in-depth data and illuminate agreement and inconsistencies 28 within and, where appropriate, between groups. Examples include public perceptions of dental implants and subsequent impact on help-seeking and decision making, 30 and general dental practitioners' views on patient safety in dentistry. 31

Focus groups can be used alone or in conjunction with other methods, such as interviews or observations, and can therefore help to confirm, extend or enrich understanding and provide alternative insights. 28 The social interaction between participants often results in lively discussion and can therefore facilitate the collection of rich, meaningful data. However, they are complex to organise and manage, due to the number of participants, and may also be inappropriate for exploring particularly sensitive issues that many participants may feel uncomfortable about discussing in a group environment.

Focus groups are primarily undertaken face-to-face but can now also be undertaken online, using appropriate technologies such as email, bulletin boards, online research communities, chat rooms, discussion forums, social media and video conferencing. 32 Using such technologies, data collection can also be synchronous (for example, online discussions in 'real time') or, unlike traditional face-to-face focus groups, asynchronous (for example, online/email discussions in 'non-real time'). While many of the fundamental principles of focus group research are the same, regardless of how they are conducted, a number of subtle nuances are associated with the online medium, 32 some of which are discussed further in the following sections.

Focus group considerations

Some key considerations associated with face-to-face focus groups are how many participants are required, whether participants within each group should know each other, and how many focus groups are needed within a single study. These issues are much debated and there is no definitive answer. However, the number of focus groups required will largely depend on the topic area, the depth and breadth of data needed, the desired level of participation required 29 and the necessity (or not) for data saturation.

The optimum group size is around six to eight participants (excluding researchers), although groups can work effectively with between three and 14 participants. 3 If the group is too small, it may limit discussion, but if it is too large, it may become disorganised and difficult to manage. It is, however, prudent to over-recruit for a focus group by approximately two to three participants, to allow for potential non-attenders. For many researchers, particularly novice researchers, group size may also be informed by pragmatic considerations, such as the type of study, resources available and moderator experience. 28 Similar size and mix considerations exist for online focus groups. Typically, synchronous online focus groups will have around three to eight participants but, as the discussion does not happen in real time, asynchronous groups may have as many as 10–30 participants. 33

The topic area and potential group interaction should guide group composition considerations. Pre-existing groups, where participants know each other (for example, work colleagues) may be easier to recruit, have shared experiences and may enjoy a familiarity, which facilitates discussion and/or the ability to challenge each other courteously. 3 However, if there is a potential power imbalance within the group or if existing group norms and hierarchies may adversely affect the ability of participants to speak freely, then 'stranger groups' (that is, where participants do not already know each other) may be more appropriate. 34 , 35

Focus group management

Face-to-face focus groups should normally be conducted by two researchers: a moderator and an observer. 28 The moderator facilitates group discussion, while the observer typically monitors group dynamics, behaviours, non-verbal cues, seating arrangements and speaking order, which is essential for transcription and analysis. The same principles of informed consent, as discussed in the interview section, also apply to focus groups, regardless of medium. However, the consent process for online discussions will probably be managed somewhat differently. For example, while an appropriate participant information leaflet (and consent form) would still be required, the process is likely to be managed electronically (for example, via email) and would need to specifically address issues relating to technology (for example, anonymity and use, storage and access to online data). 32

The venue in which a face-to-face focus group is conducted should be of a suitable size, private, quiet, free from distractions and in a collectively convenient location. It should also be conducted at a time appropriate for participants, 28 as this is likely to promote attendance. As with interviews, the same ethical considerations apply (as discussed earlier). However, online focus groups may present additional ethical challenges associated with issues such as informed consent, appropriate access and secure data storage. Further guidance can be found elsewhere. 8 , 32

Before the focus group commences, the researchers should establish rapport with participants, as this will help to put them at ease and result in a more meaningful discussion. Consequently, researchers should introduce themselves, provide further clarity about the study and how the process will work in practice and outline the 'ground rules'. Ground rules are designed to assist, not hinder, group discussion and typically include: 3 , 28 , 29

Discussions within the group are confidential to the group

Only one person can speak at a time

All participants should have sufficient opportunity to contribute

There should be no unnecessary interruptions while someone is speaking

Everyone can expect to be listened to and to have their views respected

Challenging contrary opinions is appropriate, but ridiculing is not.

Moderating a focus group requires considered management and good interpersonal skills to help guide the discussion and, where appropriate, keep it sufficiently focused. Avoid, therefore, participating, leading, expressing personal opinions or correcting participants' knowledge 3 , 28 as this may bias the process. A relaxed, interested demeanour will also help participants to feel comfortable and promote candid discourse. Moderators should also prevent the discussion being dominated by any one person, ensure differences of opinions are discussed fairly and, if required, encourage reticent participants to contribute. 3 Asking open questions, reflecting on significant issues, inviting further debate, probing responses accordingly, and seeking further clarification, as and where appropriate, will help to obtain sufficient depth and insight into the topic area.

Moderating online focus groups requires comparable skills, particularly if the discussion is synchronous, as the discussion may be dominated by those who can type proficiently. 36 It is therefore important that sufficient time and respect are accorded to those who may not be able to type as quickly. Asynchronous discussions are usually less problematic in this respect, as interactions are less instant. However, moderating an asynchronous discussion presents additional challenges, particularly if participants are geographically dispersed, as they may be online at different times. Consequently, the moderator will not always be present and the discussion may therefore need to occur over several days, which can be difficult to manage and facilitate and invariably requires considerable flexibility. 32 It is also worth recognising that establishing rapport with participants via an online medium is often more challenging than face-to-face and may therefore require additional time, skills, effort and consideration.

As with research interviews, focus groups should be guided by an appropriate interview schedule, as discussed earlier in the paper. For example, the schedule will usually be informed by the review of the literature and study aims, and will merely provide a topic guide to help inform subsequent discussions. To provide a verbatim account of the discussion, focus groups must be recorded, using an audio-recorder with a good quality multi-directional microphone. While videotaping is possible, some participants may find it obtrusive, 3 which may adversely affect group dynamics. The use (or not) of a video recorder, should therefore be carefully considered.

At the end of the focus group, a few minutes should be spent rounding up and reflecting on the discussion. 28 Depending on the topic area, it is possible that some participants may have revealed deeply personal issues and may therefore require further help and support, such as a constructive debrief or possibly even referral on to a relevant third party. It is also possible that some participants may feel that the discussion did not adequately reflect their views and, consequently, may no longer wish to be associated with the study. 28 Such occurrences are likely to be uncommon, but should they arise, it is important to further discuss any concerns and, if appropriate, offer them the opportunity to withdraw (including any data relating to them) from the study. Immediately after the discussion, researchers should compile notes regarding thoughts and ideas about the focus group, which can assist with data analysis and, if appropriate, any further data collection.

Qualitative research is increasingly being utilised within dental research to explore the experiences, perspectives, motivations and beliefs of participants. Its contributions to evidence-based practice are increasingly being recognised, both as standalone research and as part of larger mixed-method studies, including clinical trials. Interviews and focus groups remain commonly used data collection methods in qualitative research, and with the advent of digital technologies, their utilisation continues to evolve. Digital methods of qualitative data collection present additional methodological, ethical and practical considerations, but they also offer considerable flexibility to participants and researchers. Consequently, regardless of format, qualitative methods have significant potential to inform important areas of dental practice, policy and further related research.

Gussy M, Dickson-Swift V, Adams J . A scoping review of qualitative research in peer-reviewed dental publications. Int J Dent Hygiene 2013; 11 : 174–179.


Burnard P, Gill P, Stewart K, Treasure E, Chadwick B . Analysing and presenting qualitative data. Br Dent J 2008; 204 : 429–432.

Gill P, Stewart K, Treasure E, Chadwick B . Methods of data collection in qualitative research: interviews and focus groups. Br Dent J 2008; 204 : 291–295.

Gill P, Stewart K, Treasure E, Chadwick B . Conducting qualitative interviews with school children in dental research. Br Dent J 2008; 204 : 371–374.

Stewart K, Gill P, Chadwick B, Treasure E . Qualitative research in dentistry. Br Dent J 2008; 204 : 235–239.

Masood M, Thaliath E, Bower E, Newton J . An appraisal of the quality of published qualitative dental research. Community Dent Oral Epidemiol 2011; 39 : 193–203.

Ellis J, Levine A, Bedos C et al. Refusal of implant supported mandibular overdentures by elderly patients. Gerodontology 2011; 28 : 62–68.

Macfarlane S, Bucknall T . Digital Technologies in Research. In Gerrish K, Lathlean J (editors) The Research Process in Nursing . 7th edition. pp. 71–86. Oxford: Wiley Blackwell; 2015.


Lee R, Fielding N, Blank G . Online Research Methods in the Social Sciences: An Editorial Introduction. In Fielding N, Lee R, Blank G (editors) The Sage Handbook of Online Research Methods . pp. 3–16. London: Sage Publications; 2016.

Creswell J . Qualitative inquiry and research design: Choosing among five designs . Thousand Oaks, CA: Sage, 1998.

Guest G, Namey E, Mitchell M . Qualitative research: Defining and designing In Guest G, Namey E, Mitchell M (editors) Collecting Qualitative Data: A Field Manual For Applied Research . pp. 1–40. London: Sage Publications, 2013.


Pope C, Mays N . Qualitative research: Reaching the parts other methods cannot reach: an introduction to qualitative methods in health and health services research. BMJ 1995; 311 : 42–45.

Giddings L, Grant B . A Trojan Horse for positivism? A critique of mixed methods research. Adv Nurs Sci 2007; 30 : 52–60.

Hammersley M, Atkinson P . Ethnography: Principles in Practice . London: Routledge, 1995.

Oltmann S . Qualitative interviews: A methodological discussion of the interviewer and respondent contexts. Forum Qualitative Sozialforschung/Forum: Qualitative Social Research 2016; 17 : Art. 15.

Patton M . Qualitative Research and Evaluation Methods . Thousand Oaks, CA: Sage, 2002.

Wang M, Vinall-Collier K, Csikar J, Douglas G . A qualitative study of patients' views of techniques to reduce dental anxiety. J Dent 2017; 66 : 45–51.

Lindenmeyer A, Bowyer V, Roscoe J, Dale J, Sutcliffe P . Oral health awareness and care preferences in patients with diabetes: a qualitative study. Fam Pract 2013; 30 : 113–118.

Gallagher J, Clarke W, Wilson N . Understanding the motivation: a qualitative study of dental students' choice of professional career. Eur J Dent Educ 2008; 12 : 89–98.

Tod A . Interviewing. In Gerrish K, Lacey A (editors) The Research Process in Nursing . Oxford: Blackwell Publishing, 2006.

Grey E, Harcourt D, O'Sullivan D, Buchanan H, Kilpatrick N . A qualitative study of patients' motivations and expectations for dental implants. Br Dent J 2013; 214 : doi:10.1038/sj.bdj.2012.1178.

Farmer J, Peressini S, Lawrence H . Exploring the role of the dental hygienist in reducing oral health disparities in Canada: A qualitative study. Int J Dent Hygiene 2017; doi:10.1111/idh.12276.

McElhinney E, Cheater F, Kidd L . Undertaking qualitative health research in social virtual worlds. J Adv Nurs 2013; 70 : 1267–1275.

Health Research Authority. UK Policy Framework for Health and Social Care Research. Available at https://www.hra.nhs.uk/planning-and-improving-research/policies-standards-legislation/uk-policy-framework-health-social-care-research/ (accessed September 2017).

Baillie J, Gill P, Courtenay P . Knowledge, understanding and experiences of peritonitis among patients, and their families, undertaking peritoneal dialysis: A mixed methods study protocol. J Adv Nurs 2017; doi:10.1111/jan.13400.

Kvale S . Interviews . Thousand Oaks (CA): Sage, 1996.

Spradley J . The Ethnographic Interview . New York: Holt, Rinehart and Winston, 1979.

Goodman C, Evans C . Focus Groups. In Gerrish K, Lathlean J (editors) The Research Process in Nursing . pp. 401–412. Oxford: Wiley Blackwell, 2015.

Shaha M, Wenzell J, Hill E . Planning and conducting focus group research with nurses. Nurse Res 2011; 18 : 77–87.

Wang G, Gao X, Edward C . Public perception of dental implants: a qualitative study. J Dent 2015; 43 : 798–805.

Bailey E . Contemporary views of dental practitioners' on patient safety. Br Dent J 2015; 219 : 535–540.

Abrams K, Gaiser T . Online Focus Groups. In Field N, Lee R, Blank G (editors) The Sage Handbook of Online Research Methods . pp. 435–450. London: Sage Publications, 2016.

Poynter R . The Handbook of Online and Social Media Research . West Sussex: John Wiley & Sons, 2010.

Kevern J, Webb C . Focus groups as a tool for critical social research in nurse education. Nurse Educ Today 2001; 21 : 323–333.

Kitzinger J, Barbour R . Introduction: The Challenge and Promise of Focus Groups. In Barbour R, Kitzinger J (editors) Developing Focus Group Research . pp. 1–20. London: Sage Publications, 1999.

Krueger R, Casey M . Focus Groups: A Practical Guide for Applied Research. 4th ed. Thousand Oaks, California: SAGE; 2009.

Author information

Authors and affiliations

Senior Lecturer (Adult Nursing), School of Healthcare Sciences, Cardiff University

Lecturer (Adult Nursing) and RCBC Wales Postdoctoral Research Fellow, School of Healthcare Sciences, Cardiff University

Correspondence to P. Gill.

Cite this article

Gill P, Baillie J . Interviews and focus groups in qualitative research: an update for the digital age. Br Dent J 2018; 225 : 668–672. https://doi.org/10.1038/sj.bdj.2018.815

Accepted: 02 July 2018

Published: 05 October 2018

Issue Date: 12 October 2018


Qualitative Research: Definition, Methodology, Limitation & Examples

Qualitative research is a method focused on understanding human behavior and experiences through non-numerical data. Examples of qualitative research include:

  • One-on-one interviews,
  • Focus groups, Ethnographic research,
  • Case studies,
  • Record keeping,
  • Qualitative observations

In this article, we’ll provide tips and tricks on how to use qualitative research to better understand your audience through real world examples and improve your ROI. We’ll also learn the difference between qualitative and quantitative data.


Marketers often seek to understand their customers deeply. Qualitative research methods such as face-to-face interviews, focus groups, and qualitative observations can provide valuable insights into your products, your market, and your customers’ opinions and motivations. Understanding these nuances can significantly enhance marketing strategies and overall customer satisfaction.

What is Qualitative Research

Qualitative research is a market research method that focuses on obtaining data through open-ended and conversational communication. This method focuses on the “why” rather than the “what” people think about you. Thus, qualitative research seeks to uncover the underlying motivations, attitudes, and beliefs that drive people’s actions. 

Let’s say you have an online shop catering to a general audience. You do a demographic analysis and you find out that most of your customers are male. Naturally, you will want to find out why women are not buying from you. And that’s what qualitative research will help you find out.

In the case of your online shop, qualitative research would involve reaching out to female non-customers through methods such as in-depth interviews or focus groups. These interactions provide a platform for women to express their thoughts, feelings, and concerns regarding your products or brand. Through qualitative analysis, you can uncover valuable insights into factors such as product preferences, user experience, brand perception, and barriers to purchase.

Types of Qualitative Research Methods

Qualitative research methods are designed in a manner that helps reveal the behavior and perception of a target audience regarding a particular topic.

The most frequently used qualitative analysis methods are one-on-one interviews, focus groups, ethnographic research, case study research, record keeping, and qualitative observation.

1. One-on-one interviews

Conducting one-on-one interviews is one of the most common qualitative research methods. One of the advantages of this method is that it provides a great opportunity to gather precise data about what people think and their motivations.

Spending time talking to customers not only helps marketers understand who their clients are, but also helps with customer care: clients love hearing from brands. This strengthens the relationship between a brand and its clients and paves the way for customer testimonials.

  • A company might conduct interviews to understand why a product failed to meet sales expectations.
  • A researcher might use interviews to gather personal stories about experiences with healthcare.

These interviews can be performed face-to-face or over the phone and usually last from half an hour to upwards of two hours.

When a one-on-one interview is conducted face-to-face, it also gives the marketer the opportunity to read the respondent's body language and check it against their verbal responses.

2. Focus groups

Focus groups gather a small number of people to discuss and provide feedback on a particular subject. The ideal size of a focus group is usually between five and eight participants. The size of focus groups should reflect the participants’ familiarity with the topic. For less important topics or when participants have little experience, a group of 10 can be effective. For more critical topics or when participants are more knowledgeable, a smaller group of five to six is preferable for deeper discussions.

The main goal of a focus group is to find answers to the “why”, “what”, and “how” questions. This method is highly effective in exploring people’s feelings and ideas in a social setting, where group dynamics can bring out insights that might not emerge in one-on-one situations.

  • A focus group could be used to test reactions to a new product concept.
  • Marketers might use focus groups to see how different demographic groups react to an advertising campaign.

One advantage of focus groups is that the marketer doesn't necessarily have to interact with the group in person: nowadays focus groups can also be conducted online, on various devices.

Focus groups are an expensive option compared to the other qualitative research methods, which is why they are typically used to explain complex processes.

3. Ethnographic research

Ethnographic research is the most in-depth observational method that studies individuals in their naturally occurring environment.

This method aims at understanding the cultures, challenges, motivations, and settings in which the target audience lives and works.

  • A study of workplace culture within a tech startup.
  • Observational research in a remote village to understand local traditions.

Ethnographic research requires the marketer to adapt to the target audiences’ environments (a different organization, a different city, or even a remote location), which is why geographical constraints can be an issue while collecting data.

This type of research can last from a few days to a few years. It’s challenging and time-consuming and solely depends on the expertise of the marketer to be able to analyze, observe, and infer the data.

4. Case study research

The case study method has grown into a valuable qualitative research method. This type of research method is usually used in education or social sciences. It involves a comprehensive examination of a single instance or event, providing detailed insights into complex issues in real-life contexts.  

  • Analyzing a single school’s innovative teaching method.
  • A detailed study of a patient’s medical treatment over several years.

Case study research may seem difficult to carry out, but it is actually one of the simpler ways of conducting research, as it involves a deep dive into a single case, a thorough understanding of the data collection methods, and careful inference from the data.

5. Record keeping

Record keeping is similar to going to the library: you go over books or any other reference material to collect relevant data. This method uses already existing reliable documents and similar sources of information as a data source.

  • Historical research using old newspapers and letters.
  • A study on policy changes over the years by examining government records.

This method is useful for constructing a historical context around a research topic or verifying other findings with documented evidence.

6. Qualitative observation

Qualitative observation is a method that uses subjective methodologies to gather systematic information or data. It draws on the five major senses: sight, smell, touch, taste, and hearing.

  • Sight : Observing the way customers visually interact with product displays in a store to understand their browsing behaviors and preferences.
  • Smell : Noting reactions of consumers to different scents in a fragrance shop to study the impact of olfactory elements on product preference.
  • Touch : Watching how individuals interact with different materials in a clothing store to assess the importance of texture in fabric selection.
  • Taste : Evaluating reactions of participants in a taste test to identify flavor profiles that appeal to different demographic groups.
  • Hearing : Documenting responses to changes in background music within a retail environment to determine its effect on shopping behavior and mood.

Below we are also providing real-life examples of qualitative research that demonstrate practical applications across various contexts:

Qualitative Research Real World Examples

Let’s explore some examples of how qualitative research can be applied in different contexts.

1. Online grocery shop with a predominantly male audience

Method used: one-on-one interviews.

Let’s go back to one of the previous examples. You have an online grocery shop. By nature, it addresses a general audience, but after you do a demographic analysis you find out that most of your customers are male.

One good method to determine why women are not buying from you is to hold one-on-one interviews with potential customers in the category.

Interviewing a sample of potential female customers should reveal why they don’t find your store appealing. The reasons could range from not stocking enough products for women to perhaps the store’s emphasis on heavy-duty tools and automotive products, for example. These insights can guide adjustments in inventory and marketing strategies.

2. Software company launching a new product

Method used: focus groups.

Focus groups are great for establishing product-market fit.

Let’s assume you are a software company that wants to launch a new product and you hold a focus group with 12 people. Although getting their feedback regarding users’ experience with the product is a good thing, this sample is too small to define how the entire market will react to your product.

So what you can do instead is hold multiple focus groups in 20 different geographic regions. Each region should host a group of 12 for each market segment; you can even segment your audience based on age. This would be a better way to establish credibility in the feedback you receive.

3. Alan Peshkin’s “God’s Choice: The Total World of a Fundamentalist Christian School”

Method used: ethnographic research.

Moving from a fictional example to a real-life one, let’s analyze Alan Peshkin’s 1986 book “God’s Choice: The Total World of a Fundamentalist Christian School”.

Peshkin studied the culture of Bethany Baptist Academy by interviewing the students, parents, teachers, and members of the community alike, and spending eighteen months observing them to provide a comprehensive and in-depth analysis of Christian schooling as an alternative to public education.

The study highlights the school’s unified purpose, rigorous academic environment, and strong community support while also pointing out its lack of cultural diversity and openness to differing viewpoints. These insights are crucial for understanding how such educational settings operate and what they offer to students.

Even after discovering all this, Peshkin still presented the school in a positive light and stated that public schools have much to learn from such schools.

Peshkin’s in-depth research represents a qualitative study that uses observations and unstructured interviews, without any assumptions or hypotheses. He utilizes descriptive or non-quantifiable data on Bethany Baptist Academy specifically, without attempting to generalize the findings to other Christian schools.

4. Understanding buyers’ trends

Method used: record keeping.

Another way marketers can use qualitative research is to understand buyers’ trends. To do this, marketers need to look at historical data for both their company and their industry and identify where buyers are purchasing items in higher volumes.

For example, electronics distributors know that the holiday season is a peak market for sales while life insurance agents find that spring and summer wedding months are good seasons for targeting new clients.

5. Determining products/services missing from the market

Conducting your own research isn’t always necessary. If there are significant breakthroughs in your industry, you can use industry data and adapt it to your marketing needs.

The influx of hacking and hijacking of cloud-based information has made Internet security a topic of many industry reports lately. A software company could use these reports to better understand the problems its clients are facing.

As a result, the company can provide solutions prospects already know they need.


Qualitative Research Approaches

Once the marketer has decided that their research questions will provide data that is qualitative in nature, the next step is to choose the appropriate qualitative approach.

The approach chosen will take into account the purpose of the research, the role of the researcher, the data collected, the method of data analysis , and how the results will be presented. The most common approaches include:

  • Narrative : This method focuses on individual life stories to understand personal experiences and journeys. It examines how people structure their stories and the themes within them to explore human existence. For example, a narrative study might look at cancer survivors to understand their resilience and coping strategies.
  • Phenomenology : This method attempts to understand or explain life experiences or phenomena. It aims to reveal the depth of human consciousness and perception, such as by studying the daily lives of those with chronic illnesses.
  • Grounded theory : This method investigates a process, action, or interaction with the goal of developing a theory “grounded” in observations and empirical data.
  • Ethnography : This method describes and interprets an ethnic, cultural, or social group.
  • Case study : This method examines episodic events in a definable framework, develops in-depth analyses of single or multiple cases, and generally explains “how”. An example might be studying a community health program to evaluate its success and impact.

How to Analyze Qualitative Data

Analyzing qualitative data involves interpreting non-numerical data to uncover patterns, themes, and deeper insights. This process is typically more subjective and requires a systematic approach to ensure reliability and validity. 

1. Data Collection

Ensure that your data collection methods (e.g., interviews, focus groups, observations) are well-documented and comprehensive. This step is crucial because the quality and depth of the data collected will significantly influence the analysis.

2. Data Preparation

Once collected, the data needs to be organized. Transcribe audio and video recordings, and gather all notes and documents. Ensure that all data is anonymized to protect participant confidentiality where necessary.
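As a minimal sketch of the anonymization step, the snippet below replaces known participant names with neutral pseudonym codes. The names, codes, and transcript line are all invented for illustration; real projects typically need to handle nicknames, places, and other identifying details as well.

```python
import re

# Hypothetical mapping of participant names to pseudonym codes.
PSEUDONYMS = {"Alice": "P01", "Bob": "P02"}

def anonymise(transcript: str) -> str:
    """Replace each known participant name with its pseudonym code."""
    for name, code in PSEUDONYMS.items():
        # \b keeps the match to whole words only.
        transcript = re.sub(rf"\b{re.escape(name)}\b", code, transcript)
    return transcript

print(anonymise("Alice said she agreed with Bob."))
# P01 said she agreed with P02.
```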

3. Familiarization

Immerse yourself in the data by reading through the materials multiple times. This helps you get a general sense of the information and begin identifying patterns or recurring themes.

4. Coding

Develop a coding system to tag data with labels that summarize and account for each piece of information. Codes can be words, phrases, or acronyms that represent how these segments relate to your research questions.

  • Descriptive Coding : Summarize the primary topic of the data.
  • In Vivo Coding : Use language and terms used by the participants themselves.
  • Process Coding : Use gerunds (“-ing” words) to label the processes at play.
  • Emotion Coding : Identify and record the emotions conveyed or experienced.
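Coding is usually done in dedicated software, but the underlying structure is simple enough to sketch. In the illustrative example below (segments and code labels are invented), each transcript segment is tagged with codes, then inverted into a codebook mapping each code to its supporting evidence:

```python
# Each coded segment pairs a piece of transcript text with its codes.
segments = [
    ("I felt nervous before the appointment", ["emotion:anxiety"]),
    ("I kept putting the visit off", ["process:avoiding"]),
    ("The dentist explained everything clearly", ["descriptive:communication"]),
]

# Invert into a codebook: code -> list of supporting segments,
# making it easy to retrieve the evidence behind each code later.
codebook = {}
for text, codes in segments:
    for code in codes:
        codebook.setdefault(code, []).append(text)

print(sorted(codebook))
```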

5. Thematic Development

Group codes into themes that represent larger patterns in the data. These themes should relate directly to the research questions and form a coherent narrative about the findings.
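The grouping step can be sketched the same way. In this hypothetical example, codes are mapped into two candidate themes and the (invented) application counts are aggregated to show how well supported each theme is across the data:

```python
# Hypothetical grouping of codes into two candidate themes.
themes = {
    "dental anxiety": ["emotion:anxiety", "process:avoiding"],
    "patient communication": ["descriptive:communication"],
}

# How often each code was applied across the data set (invented counts).
code_counts = {
    "emotion:anxiety": 5,
    "process:avoiding": 3,
    "descriptive:communication": 7,
}

# Aggregate code counts per theme.
theme_support = {
    theme: sum(code_counts.get(code, 0) for code in codes)
    for theme, codes in themes.items()
}
print(theme_support)
# {'dental anxiety': 8, 'patient communication': 7}
```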

6. Interpreting the Data

Interpret the data by constructing a logical narrative. This involves piecing together the themes to explain larger insights about the data. Link the results back to your research objectives and existing literature to bolster your interpretations.

7. Validation

Check the reliability and validity of your findings by reviewing if the interpretations are supported by the data. This may involve revisiting the data multiple times or discussing the findings with colleagues or participants for validation.
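One concrete form of the colleague check, not prescribed by the text above but common in practice, is intercoder agreement: a second coder codes the same segments and you measure how often the two coders agree. The code assignments below are invented for illustration.

```python
# Codes assigned to the same four segments by two coders (illustrative).
coder_a = ["anxiety", "avoiding", "communication", "anxiety"]
coder_b = ["anxiety", "anxiety", "communication", "anxiety"]

# Simple percent agreement; more robust statistics (e.g. Cohen's kappa)
# also correct for agreement expected by chance.
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
print(f"{agreement:.0%}")  # 75%
```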

8. Reporting

Finally, present the findings in a clear and organized manner. Use direct quotes and detailed descriptions to illustrate the themes and insights. The report should communicate the narrative you’ve built from your data, clearly linking your findings to your research questions.

Limitations of qualitative research

The limitations of qualitative research are quite distinctive: the data collector's techniques and their own unique observations can alter the information in subtle ways. That being said, these are qualitative research's main limitations:

1. It’s a time-consuming process

The main drawback of qualitative research is that the process is time-consuming: a study might take several weeks or months. Another problem is that the interpretations are limited, because personal experience and knowledge influence observations and conclusions. Also, since this process relies on personal interaction for data collection, discussions often tend to deviate from the main issue being studied.

2. You can’t verify the results of qualitative research

Because qualitative research is open-ended, participants have more control over the content of the data collected. So the marketer is not able to verify the results objectively against the scenarios stated by the respondents. For example, in a focus group discussing a new product, participants might express their feelings about the design and functionality. However, these opinions are influenced by individual tastes and experiences, making it difficult to ascertain a universally applicable conclusion from these discussions.

3. It’s a labor-intensive approach

Qualitative research requires a labor-intensive analysis process, such as categorization and recording. It also requires well-experienced marketers to obtain the needed data from a group of respondents.

4. It’s difficult to investigate causality

Qualitative research requires thoughtful planning to ensure the obtained results are accurate. There is no way to analyze qualitative data mathematically; this type of research is based more on opinion and judgment than on measured results. And because all qualitative studies are unique, they are difficult to replicate.

5. Qualitative research is not statistically representative

Because qualitative research is a perspective-based method of research, the responses given are not measured.

Comparisons can be made, and this can lead toward some replication, but for the most part, quantitative data is required for circumstances that need statistical representation, and that is not part of the qualitative research process.

While doing a qualitative study, it’s important to cross-reference the data obtained with the quantitative data. By continuously surveying prospects and customers marketers can build a stronger database of useful information.

Quantitative vs. Qualitative Research


Quantitative and qualitative research are two distinct methodologies used in the field of market research, each offering unique insights and approaches to understanding consumer behavior and preferences.

As we already defined, qualitative analysis seeks to explore the deeper meanings, perceptions, and motivations behind human behavior through non-numerical data. On the other hand, quantitative research focuses on collecting and analyzing numerical data to identify patterns, trends, and statistical relationships.  

Let’s explore their key differences: 

Nature of Data:

  • Quantitative research : Involves numerical data that can be measured and analyzed statistically.
  • Qualitative research : Focuses on non-numerical data, such as words, images, and observations, to capture subjective experiences and meanings.

Research Questions:

  • Quantitative research : Typically addresses questions related to “how many,” “how much,” or “to what extent,” aiming to quantify relationships and patterns.
  • Qualitative research: Explores questions related to “why” and “how,” aiming to understand the underlying motivations, beliefs, and perceptions of individuals.

Data Collection Methods:

  • Quantitative research : Relies on structured surveys, experiments, or observations with predefined variables and measures.
  • Qualitative research : Utilizes open-ended interviews, focus groups, participant observations, and textual analysis to gather rich, contextually nuanced data.

Analysis Techniques:

  • Quantitative research: Involves statistical analysis to identify correlations, associations, or differences between variables.
  • Qualitative research: Employs thematic analysis, coding, and interpretation to uncover patterns, themes, and insights within qualitative data.
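The analysis-technique contrast above can be made concrete with a minimal sketch: numerical ratings are summarised statistically, while open-ended responses are coded against a keyword codebook to count themes. All data and the codebook below are invented for illustration; real qualitative coding is interpretive, not keyword matching.

```python
from statistics import mean, stdev

# Quantitative: numerical ratings analysed statistically (illustrative data).
ratings = [4, 5, 3, 4, 5, 2, 4]
print(f"mean={mean(ratings):.2f}, sd={stdev(ratings):.2f}")

# Qualitative: open-ended answers coded into themes (hypothetical codebook).
responses = [
    "The checkout felt confusing and slow",
    "I love the design but delivery was slow",
    "Support was friendly and helpful",
]
codebook = {
    "usability": ["confusing", "slow"],
    "service": ["support", "friendly", "helpful"],
    "aesthetics": ["design", "love"],
}

# Count how many responses touch each theme.
theme_counts = {theme: 0 for theme in codebook}
for text in responses:
    lowered = text.lower()
    for theme, keywords in codebook.items():
        if any(k in lowered for k in keywords):
            theme_counts[theme] += 1
print(theme_counts)  # {'usability': 2, 'service': 1, 'aesthetics': 1}
```

The quantitative branch yields a single comparable number per question; the qualitative branch preserves which ideas appeared, at the cost of requiring a codebook built by a human analyst.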


  • Last modified: January 3, 2023
  • Conversion Rate Optimization , User Research

Valentin Radu

  • Open access
  • Published: 18 May 2024

Identifying primary care clinicians’ preferences for, barriers to, and facilitators of information-seeking in clinical practice in Singapore: a qualitative study

  • Mauricette Moling Lee 1 , 2 ,
  • Wern Ee Tang 3 ,
  • Helen Elizabeth Smith 4 &
  • Lorainne Tudor Car 1 , 5  

BMC Primary Care volume  25 , Article number:  172 ( 2024 )


The growth of medical knowledge and patient care complexity calls for improved clinician access to evidence-based resources. This study aimed to explore the primary care clinicians’ preferences for, barriers to, and facilitators of information-seeking in clinical practice in Singapore.

A convenience sample of ten doctors and ten nurses was recruited. We conducted semi-structured face-to-face in-depth interviews. The interviews were recorded, transcribed verbatim, and analysed using thematic content analysis.

Of the 20 participants, eight doctors and ten nurses worked at government-funded polyclinics and two doctors worked in private practice. Most clinicians sought clinical information daily at the point-of-care. The information clinicians searched for most often concerned less common conditions. Clinicians preferred evidence-based resources such as clinical practice guidelines and UpToDate®. Clinical practice guidelines were mostly used from memory or when they were updated. Clinicians also commonly sought answers from their peers. Furthermore, clinicians frequently used smartphones to access the Google search engine and the UpToDate® app. The barriers to accessing clinical information included lack of time, internet surfing separation on work computers, limited search functions in the organisation's server, and limited access to medical literature databases. The facilitators of accessing clinical information included convenience, easy access, and the trustworthiness of information sources.

Most primary care clinicians in our study sought clinical information at the point-of-care daily and reported increasing use of smartphones for information-seeking. Future research focusing on interventions to improve access to credible clinical information for primary care clinicians at the point-of-care is recommended.

Trial registration

This study has been reviewed by NHG Domain Specific Review Board (NHG DSRB) (the central ethics committee) for ethics approval. NHG DSRB Reference Number: 2018/01355 (31/07/2019).

Peer Review reports

Primary care clinicians provide the bulk of care to patients in primary care settings. In Singapore, there are 23 polyclinics and about 1,800 General Practitioner (GP) clinics, with private GPs providing primary care for about 80% of the population [ 1 ]. The polyclinics are run by three healthcare groups – National Healthcare Group, National University Health System, and SingHealth – and serve populations in the central, northern, north-eastern, western, and eastern parts of Singapore [ 1 ]. Every day, clinicians make many clinical decisions, ranging from diagnosis and prognosis to treatment and patient management [ 2 , 3 ]. However, to provide consistently high-quality patient care, such clinical judgments must be informed by trustworthy medical evidence [ 4 , 5 , 6 ]. To meet their information needs, clinicians seek relevant information from various sources [ 3 ]. Searching for and using information to meet these needs has been described as information-seeking behaviour [ 7 , 8 , 9 ].

Previous research showed that clinicians often raise questions about patient care in their practice [ 10 ]. Half of those questions are left unanswered. Identifying what information primary care clinicians need, how they search for required information and how they adopt it into practice is essential in ensuring safe and high-quality patient care [ 11 , 12 ]. While there are reports of information-seeking behaviour in primary care from other countries [ 2 , 8 , 13 , 14 ], similar reports in Singapore are limited.

Clinicians may consult several sources to support their decisions, including clinical practice guidelines (CPGs), journal articles, peers, and more [ 3 ]. However, there is wide variation in the adoption of evidence-based practices across healthcare disciplines, which could lead to poorer primary care outcomes [ 8 , 12 , 15 , 16 , 17 , 18 , 19 ]. To mitigate this, a commonly employed approach is the development of CPGs, clinical pathways, or care guides [ 20 ]. These offer a structured, reliable, and consistent approach to disseminating healthcare evidence and reduce unnecessary clinical practice variation [ 21 ]. However, CPGs are costly to develop and update, context-specific, and unevenly adopted across healthcare systems [ 22 ]. The uptake of CPGs is affected by diverse factors such as presentation formats, time pressures, reputability, and ownership [ 14 , 23 ]. Meanwhile, other sources of clinical practice-related information may not be as valid, credible, or current as CPGs.

Increasingly, healthcare professionals worldwide use their smartphones as an important channel for clinical information [ 24 , 25 , 26 , 27 ], using them to access websites, mobile apps or communicate with peers [ 28 ]. The use of electronic resources improves clinicians' knowledge and behaviour as well as patients' outcomes [ 29 ]. However, evidence on how smartphones are used at the point-of-care, particularly for evidence-seeking, is limited. Singapore, with a total population of 5.92 million as of the end of June 2023 [ 30 ], is one of the countries with the highest smartphone usage among its residents, with approximately 5.72 million (97%) users in 2023 [ 31 ]. Correspondingly, smartphones may be an important information-seeking channel among primary care clinicians. However, the increasing cyber threats worldwide may lead to internet surfing separation as a common security measure.

Institutional policies limiting access to computers at the point-of-care deter clinicians from seeking information and disrupt their workflow [ 32 ]. Following patient data privacy breaches, the Singapore Ministry of Health introduced internet surfing separation – restrictions on internet access and browsing – as a security measure in all public healthcare institutions in Singapore in July 2018 [ 33 ]. This has limited primary care clinicians' internet access at the workplace: since its introduction, the internet has not been accessible from any of the clinics' desktop computers and has been available only through a few work laptops with limited availability to polyclinic staff. At the time this research was conducted, primary care clinicians in the public healthcare sector in Singapore therefore could not access the internet from their work computers. Clinicians rely on evidence-based information to make informed decisions about patient care [ 4 , 5 , 6 ]. When access to online resources is restricted, clinicians may struggle to obtain current and correct information, jeopardising patient safety and the quality of care offered [ 11 , 12 ]. We therefore sought to understand how primary care clinicians were addressing their clinical information needs when their work computers could not be used to access evidence-based resources online. This study aimed to explore primary care clinicians' preferences for, barriers to, and facilitators of information-seeking in clinical practice in Singapore.

A qualitative study consisting of semi-structured face-to-face in-depth interviews was used to explore the primary care clinicians’ preferences for, barriers to, and facilitators of information-seeking in clinical practice in Singapore. The interviews were conducted between August and November 2019 at two polyclinics and two private clinics in Singapore.

The study was approved by the institutional ethics committee (NHG DSRB Reference Number: 2018/01355). All participants read the study information sheet before providing written consent. This study followed the Consolidated Criteria for Reporting Qualitative Research guidelines [ 34 ] [see Additional file 1].

Participants and recruitment

We included primary care doctors and registered nurses from the polyclinics and private primary care practices who were aged ≥ 21 years and fluent in English. We employed convenience sampling. Prospective participants were recruited from various polyclinics through personal contacts and advertisements. Five potential participants were contacted but did not respond to the invitation, two declined participation, and one resigned before the commencement of the study and hence did not take part.

Data collection

The interviews were conducted by a female researcher (MML) in designated private meeting rooms or consultation rooms at the polyclinics or in the respective consultation rooms of the private practices. MML was provided with sufficient details, resources, and training on qualitative research before the study commenced. Before the start of each interview, the researcher introduced herself, stated the aim of the interview, explained confidentiality, and obtained informed consent and permission to use a digital voice recorder. Interviewees could pause the interview at any time to attend to professional responsibilities. MML conducted the interviews using an interview guide based on a review of the relevant literature and team discussions [ 10 ] [see Additional file 2]. The interview topics included the types of questions arising during clinical encounters, commonly employed sources of clinical information, frequency and timing of information-seeking, satisfaction with existing information sources, use of CPGs, barriers to information-seeking, and reliability of obtained information. All interview sessions lasted no more than 60 minutes (mean, 25 minutes) and were digitally recorded and transcribed. Field notes were taken during the interviews for further analysis. Data saturation, defined as no new themes arising after three consecutive interviews [ 35 ], was achieved after 20 interviews, so we stopped recruitment at 20 participants. Participants were compensated with a SGD25 voucher and a meal upon completion of the interview.
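The stopping rule used here — data saturation, operationalised as no new themes arising after three consecutive interviews — can be sketched as a simple check over a coding log. The theme labels below are hypothetical, purely to illustrate the rule.

```python
def saturated(themes_per_interview, window=3):
    """Return True once `window` consecutive interviews add no new themes."""
    seen, run = set(), 0
    for themes in themes_per_interview:
        new = set(themes) - seen
        run = 0 if new else run + 1  # reset the streak whenever a new theme appears
        seen |= set(themes)
        if run >= window:
            return True
    return False

# Hypothetical coding log: themes identified in each successive interview.
log = [{"access"}, {"access", "trust"}, {"peers"}, {"trust"}, {"access"}, {"peers"}]
print(saturated(log))  # True: interviews 4-6 contribute no new themes
```

In practice saturation is judged by analysts rather than computed, but the sketch makes the "three consecutive interviews" criterion explicit.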

Data analysis

The qualitative data were analysed using Burnard's method, a structured approach to thematic content analysis established in 1991 [ 36 ]. Burnard's method comprises fourteen stages for categorising and coding interview transcripts [ 36 ] [see Additional file 4]. The types of questions were analysed using Ely's classification [ 37 ]. Ely et al. (2000) developed an approach for categorising clinician queries about patient care, dividing clinical questions in primary care into several main categories [ 37 ]; for example, the three most common categories were "What is the drug of choice for condition x?", "What is the cause of symptom x?", and "What test is indicated in situation x?" [ 37 ]. Burnard's method complements Ely's approach by enabling a comprehensive evaluation of the information-seeking behaviour patterns it identifies. The study team used the Ely et al. (2000) framework to better understand clinicians' information needs and the types of questions they had about patient care; it served mainly to facilitate the study team's discussion, and the team did not adopt Ely's categories themselves. The analysis was done independently and in parallel by two researchers (MML and LTC). First, the researchers familiarised themselves with the transcripts by reading them multiple times. Second, initial codes were proposed. Third, themes were derived from the codes. Fourth, the researchers discussed and combined their themes for comparison. Finally, they reached a consensus on the themes and how to define them. After the initial stages of familiarisation with the transcripts and proposal of initial codes, related codes were consolidated under more comprehensive headings to streamline the codebook, which allowed us to organise them more effectively under pertinent subthemes.
For example, various information sources mentioned by the participants, such as evidence-based resources, non-evidence-based resources, and colleagues, were merged into a subtheme titled "popular information sources" [see Additional file 3]. This process was done iteratively through several rounds. The final list of themes and subthemes was created by removing repeated or similar subthemes. Two other study team members independently created lists of headings without referring to the first researcher's list. The three lists were then discussed and refined to increase validity and reduce researcher bias. Finally, we employed abstraction, developing a basic description of the phenomenon under investigation to establish the final subthemes and themes. Tables 1 and 2 illustrate how these stages were conducted.
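The consolidation step described above — merging related initial codes under broader subtheme headings — can be sketched as a mapping from codes to subthemes. The mapping and transcript snippets below are illustrative, paraphrasing the examples given in the text rather than reproducing Additional file 3.

```python
# Illustrative code-to-subtheme mapping, following the examples in the text
# (e.g. evidence-based resources, non-evidence-based resources, and colleagues
# were all merged into "popular information sources").
code_to_subtheme = {
    "evidence-based resources": "popular information sources",
    "non-evidence-based resources": "popular information sources",
    "colleagues": "popular information sources",
    "rare condition": "the type of information needs",
    "pharmacology": "the type of information needs",
}

def consolidate(coded_segments):
    """Group coded transcript segments under their subtheme headings."""
    grouped = {}
    for code, segment in coded_segments:
        subtheme = code_to_subtheme.get(code, code)  # unmapped codes stand alone
        grouped.setdefault(subtheme, []).append(segment)
    return grouped

# Hypothetical coded segments from the transcripts.
segments = [
    ("colleagues", "we'll call one of our colleagues here and discuss the case"),
    ("evidence-based resources", "this app that I have on my phone is called UpToDate"),
    ("rare condition", "patient comes in with very unusual presentations"),
]
print(consolidate(segments))
```

The iterative rounds described in the text correspond to repeatedly revising `code_to_subtheme` and re-running the grouping until the subtheme list stabilises.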

Table 1 illustrates that the previous subtheme for "rare condition" was "most searched information in clinical practice", which was revised to "the type of information needs" after further discussion with study team members so that it could include additional codes such as pharmacology. A third reviewer (HES) acted as an arbiter. The coding of transcripts was performed using a word processor. A predetermined classification system was not employed, since there was insufficient prior research on clinicians' information-seeking behaviour in Singapore; instead, themes were identified dynamically from the data using an inductive approach. Burnard's method was applied inductively to establish categories and abstraction through open coding, as illustrated in Tables 1 and 2 . No single method of analysis is appropriate for every type of interview data [ 36 ]. Burnard's method offers a systematic approach to thematic content analysis, which can improve the objectivity and transparency of qualitative research [ 36 ]. As descriptive studies can investigate perceived barriers to and facilitators of adopting new behaviours [ 38 ], a more descriptive set of themes was appropriate for the study's objectives and is consistent with Burnard's method [ 36 ].
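As a rough illustration of sorting clinical questions into generic types in the spirit of Ely et al.'s three most common categories, simple keyword patterns can be used. The patterns below are our own illustration for exposition, not the published classification rules, which rely on analyst judgment.

```python
import re

# Illustrative patterns loosely echoing Ely et al.'s three most common
# question categories; real classification is done by human analysts.
CATEGORIES = [
    ("drug of choice", re.compile(r"\b(drug|medication|prescri\w+|dose|dosing)\b", re.I)),
    ("cause of symptom", re.compile(r"\b(cause|why|aetiolog\w+|etiolog\w+)\b", re.I)),
    ("test indicated", re.compile(r"\b(test|investigation|screen\w*|scan)\b", re.I)),
]

def classify(question):
    """Return the first matching generic question type, else 'other'."""
    for label, pattern in CATEGORIES:
        if pattern.search(question):
            return label
    return "other"

print(classify("What is the drug of choice for hypertension?"))  # drug of choice
print(classify("What test is indicated for chest pain?"))        # test indicated
```

Such a sketch only shows the shape of the taxonomy; ambiguous questions (matching several patterns) are exactly where analyst discussion, as in this study, is needed.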

A total of 20 clinicians were recruited: eight doctors and all 10 nurses worked in the polyclinics, and two doctors worked in private practice. All of the nurses and three of the doctors who participated were female. The demographics of the clinicians are presented in Table  3 (Demographics of clinicians, N  = 20).

Thematic analysis

Three distinct themes were derived from the analysis of the interview data: 1) the choice of information sources, 2) accessing information sources, and 3) the role of evidence in information-seeking [see Additional file 3]. These are represented in Fig.  1 .

Fig. 1: Themes and subthemes derived from the interviews

1) The choice of information sources

This theme encompasses the different sources clinicians in our study used to seek and gather information. Clinicians' preferred information sources fell into five subthemes: popular information sources, CPGs as an information source, the internet as an information source, peers as an information source, and accessing online information using smartphones.

Popular information sources

Clinicians mentioned that their first-choice point-of-care evidence-based online sources were UpToDate®, an evidence-based resource that helps clinicians make decisions and informs their practice [ 39 ], CPGs, and the Monthly Index of Medical Specialties, followed by PubMed (Medline) and continuing medical education sources. A non-evidence-based information source, the Google search engine, was commonly mentioned as well. Lastly, clinicians often mentioned consulting their colleagues:

“I will Google, look for images and compare…I tell them that I’m looking because I am not sure, and I want to just confirm…sometimes even show them the photo on my phone, to ensure…what they saw, the rash…might have already disappeared is…what I suspect it is.” Doctor02.
“I commonly I would search…this app that I have on my phone is called UpToDate®, right…because it’s the most easiest…easily accessible source of information…I’ll just type the whole lot into…the Lexicomp component of the UpToDate® and then from there it tells me whether the drugs have interactions, what kind of interactions.” Doctor07.

CPGs as an information source

Clinicians mentioned that CPGs did not apply to all patients. Doctors described CPGs as evidence-based resources designed to be safe and most relevant to practice as a baseline reference. Doctors considered CPGs lengthy at times and noted the need to apply clinical discretion when using them. Doctors also mentioned that CPGs sometimes focused on cost-effectiveness rather than quality of care:

“I think they are useful in summarising the latest evidence and what…is recommended, especially if they are local clinical practice guidelines, then it’s tailored to our own population…And keeping in mind perhaps the cost sensitivities, cost effectiveness” Doctor02.

Nurses said that they saw CPGs as a standard of practice for clinicians and an easy resource to refer to. However, some nurses said that they found CPGs difficult to access and outdated:

“…but it’s not so…easy to access…because you have to…enter certain keywords, and sometimes it’s not that keyword that’s going to churn out all the information you see…like, try a few times…want to make sure that…I’m doing things correctly…following the guidelines…just quickly…log into the intranet and…search for the information.” Nurse01.

If nurses had difficulty accessing CPGs, they said that they tended to seek doctors’ opinion:

“It’s very informative. It’s quite clear, easy to refer to…in certain special cases…not stated in the book, we will still have to seek…doctor’s opinion” Nurse07.

Internet as an information source

Clinicians mentioned that the internet provided access to clinical information for practice. However, clinicians mentioned that it was important to ensure that the information was well-grounded and dependable:

“…some…information might not be…so trustworthy…takes…a little additional filtering process before…I can say this is a reliable source or not…some of the websites…more opinion-based…very high…chance of bias…the reference from that writing…written at the bottom where I can do…cross-checking…I think the credibility…for this…article written is slightly higher.” Doctor01.
“If only you have an internet, you can always show it to the patient also. For example, when I search for some information, I can even help in patient education…for now, I feel it is a bit harder…And then I have to rely on my phone to use the UpToDate®.” Doctor03.

Peers as an information source

Clinicians mentioned approaching peers who were available to seek a second opinion on their clinical questions. They also mentioned that they tended to approach experts:

“…it’s really a case-to-case basis and it depends if the colleagues around…Also it depends on the proximity of the colleague. If the colleague knows a lot but…busy in another room on another level then I might approach next door colleagues instead.” Doctor06.
“I think most of time, if we are going to get our information immediately, we’ll call one of our colleagues here…discuss the case…we’ll come to a consensus, what will be the best for our kind of patient…contribute to the informed decision immediately.” Doctor01.

Accessing online information using smartphones

Clinicians mentioned that their smartphones were convenient for accessing information for practice. For instance, accessing the UpToDate® app and Google search engine using smartphones:

“…commonly I would search…this app that I have on my phone is called UpToDate®…because it’s the most easiest…easily accessible source of information…I’ll just type the whole lot into…the Lexicomp component of…UpToDate® and then from there it tells me whether the drugs have interactions.” Doctor07.
“I will go on the internet…if I needed information about…certain medical conditions…Just definitions, just to have an idea of, you know… Correct, pure Google.” Nurse01.

2) Accessing information sources

This theme encompasses different aspects of information-seeking and access by clinicians in our study. The factors influencing clinicians' use of information sources fell into five subthemes: the type of information needs, the timing and frequency of information needs, the timing and frequency of using CPGs, information-seeking facilitators, and information-seeking barriers.

The type of information needs

Clinicians mentioned that they commonly sought information on less common health areas such as unusual skin rashes, rare diseases, paediatrics, women’s health, medications, and at times concerning all clinical areas:

“Drug information…maybe dosing and everything…when we are prescribing for paediatric…we also see female patients who are pregnant…Lactating, and all… contraindicated” Doctor03.
“Other ones that I would search for would be if the patient comes in with very…unusual presentations.” Doctor07.

The timing and frequency of information needs

Clinicians explained that they commonly sought clinical information daily or several times a week, either at the point-of-care or at home:

“I will look at least weekly once…It’s of my own interest…Not during working times, most of the time…When we are travelling, in MRT…Sometimes at home also.” Nurse10.
“Not so many cases…It’s quite rare, actually…Because most of our cases are quite common…we still can deal with…Yes…Maybe once a few weeks…Once a month…When I have concerns or any doubts…After patient left…yes. Maybe, sometimes…And after the doctors consult.” Nurse05.

The timing and frequency of using CPGs

Clinicians said that they commonly used CPGs daily or when there was a change or update to them:

“…day to day, because all these guidelines I’m familiar with, it’s in my memory…internally we do have guidelines for certain acute conditions.” Doctor02.

Information-seeking facilitators

Clinicians discussed convenience, easy access, the trustworthiness of information, having colleagues who are specialists, and being keen to keep up-to-date as facilitators of seeking clinical information:

“I find…clinical practice guidelines quite useful…since it’s on our terminal. I do open that up to look at it…it does give us quite a convenient and no fuss way to be able to access them on our terminal while we are seeking information whether during or even after consults.” Doctor06.
“work instructions…Policies and protocols…Intranet…So I just want to make sure that…I’m doing things correctly, that I’m, you know, following the guidelines. So I’ll just quickly enter, you know, log into the intranet and just search for the information…The information that’s on the intranet has, you know, been validated by an expert, you know…So that’s why I rely heavily on it.” Nurse01.

Information-seeking barriers

Clinicians mentioned that internet surfing separation, the lack of time, limited access to medical literature databases, and limited search function in the organisation’s server were barriers to seeking clinical information:

“The information I know is there…But it’s not so easy to search for…Not user-friendly, not very exhaustive…Sometimes you just have to…trial-and-error…different keywords.” Nurse01.

Additionally, clinicians frequently mentioned using smartphones to access clinical information. However, doctors said that they worried that using a smartphone during a clinical consultation might make them seem unprofessional to patients:

“I need to explain to the patient that…I am using my phone because I don’t have internet access or may appear rude to the patient; I am surfing my phone in the middle of the consult.” Doctor02.

Doctors reported that they were also concerned about their privacy when they showed their smartphones to their patients:

“…sometimes…you don’t want to show your phone to them(patients) also…Because sometimes you may have other notifications.” Doctor05.

3) The role of evidence in information-seeking

This theme explores the role of evidence in clinicians' information-seeking in our study. The value of scientific research for clinicians seeking information fell into two subthemes: the importance of trustworthy information sources and employing evidence-based information sources.

The importance of trustworthy information sources

Clinicians agreed that peer-reviewed clinical information was reliable. Additionally, doctors expressed trust in clinical information if there were frequent updates of the content:

“…they(UpToDate®) do put…the date of which they have updated the articles…it’s from multiple sources…citations and…management…seems quite sound.” Doctor06.
“The information that’s on the intranet has…been validated by an expert.” Nurse01.

Employing evidence-based information sources

Clinicians mentioned that emphasising the importance of evidence in patient care and building an evidence-based culture in the workplace help to encourage the use of evidence-based information sources in practice:

“I don’t have any concrete kind of suggestions now but…perhaps find some ways to sustain interest…to remind us that we’re doing this for best of patients.” Doctor06.
“If I have discussions with my peers regarding cases then I will, like, refer back to the…to…the CPG and things like that…I think the conference…or the…forums they are also a very good source of information.” Nurse03.

To our knowledge, this is the first study conducted in Singapore to investigate primary care clinicians' preferences for, barriers to, and facilitators of information-seeking in clinical practice. Clinicians mostly sought information on conditions such as unusual skin rashes, rare diseases, paediatrics, and women's health. Most clinicians searched for clinical information at the point-of-care daily for a variety of reasons, including personal interest, clarification of doubts, or self-improvement. Sources of information included CPGs, online evidence-based resources, the internet, peers, and smartphones. Although CPGs were clinicians' preferred sources of information, they did not refer to them regularly, relying instead on memory or consulting them when the guidelines were updated. We also found that using smartphones to seek clinical information was commonly reported among clinicians. The barriers to primary care clinicians' information-seeking were lack of time, internet surfing separation on work computers, the limited search function of their organisation's server, and limited access to medical literature databases. The facilitators were convenience, ease of access, and the trustworthiness of the information sources.

Like other studies [ 3 , 8 , 20 , 40 , 41 ], we found that the choice of information sources was affected by the trustworthiness and availability of resources. CPGs were preferred among clinicians as they were written by experts or specialists in their field. However, some clinicians felt that CPGs were too lengthy to be used at the point-of-care, outdated, and difficult to locate on their organisation's server. Additionally, clinicians only referred to CPGs recalled from memory or when they were updated. This highlights the importance of providing an alternative evidence-based clinical resource that is succinct and easy to refer to at the point-of-care [ 42 ]. Using medical apps for the provision of point-of-care summaries may mitigate the challenges of using CPGs for clinical information. Correspondingly, clinicians in the polyclinics commonly referred to the UpToDate® app provided by their organisation as a point-of-care resource they could use on their smartphones. Evidence-based point-of-care resources are commonly presented in key point summaries, follow formal categorisation of medical conditions, and provide references [ 43 ]. Limited research has shown that it was beneficial to integrate UpToDate® searches into daily clinical practice [ 42 ]. Additionally, the American Accreditation Commission International's @TRUST programme is one framework designed to encourage trustworthy online content. It is an invaluable resource for both individuals looking for health information online and organisations attempting to deliver trustworthy content [ 44 ]. However, continual efforts are required to encourage its use and ensure that individuals have access to accurate and reliable health information online. Therefore, future studies should investigate the quality of existing medical apps in providing point-of-care summaries and the effects of their use in the primary care setting.

We also found that clinicians were seeking clinical information on their smartphones. This is not surprising, as Singapore’s public healthcare institutions enforce internet surfing separation on work computers. Furthermore, with the high penetration of smartphones in Singapore [45], these devices have become the next best alternative for clinicians seeking online clinical information. Clinicians in the polyclinics frequently cited using the UpToDate® app and the Google search engine on their smartphones. Similar to another study [46], we found that doctors often used Google Images on their smartphones to identify less common rashes. Additionally, our study found that clinicians used Google Images to educate patients. However, clinicians in the polyclinics reported privacy and professionalism concerns as barriers to using smartphones during clinical consultations. These findings were consistent with a systematic review assessing the challenges and opportunities of mobile device use by healthcare professionals [47]. Despite the internet surfing separation in public healthcare institutions in Singapore and the availability of various information sources, the barriers to clinicians’ information-seeking that we found were similar to those reported in other studies [3, 20, 48]. Future research may focus on addressing specific barriers to the use of mobile devices by primary care clinicians at the point of care.

Finally, smartphones may be an important information-seeking channel for healthcare professionals, and hospitals or governments may need to establish regulations to protect healthcare professionals who use smartphones in clinical practice. Compliance with regulations governing smartphone use at work could be examined during the appraisal of healthcare professionals. Guidelines on smartphone use among healthcare professionals could be tailored to specific situations, such as obtaining patients' permission before sharing medically sensitive information via text. Accordingly, guidelines could be grounded in best-practice evidence and common actionable statements. Additionally, this study suggests that clinicians have largely been left to navigate information access on their own, which may not be the most effective approach. Developing a more robust culture of evidence-based medicine within the organisation is essential and ought to be explicitly promoted moving forward. It could be beneficial for clinicians to receive organised training on effective information-seeking strategies and resources.

Our study has several strengths and limitations. A key strength is that we employed an in-depth interview approach with an open-ended style of questioning. The interactive nature of our interviews provided richer context and room for free responses from the interviewees. We were then able to scrutinise the conversations critically and provide insights that informed the final analysis of themes.

There are several limitations. Firstly, we did not explore the influence of gender and age on the participants’ information-seeking behaviour, which has been demonstrated in other research in this area [14]. Secondly, the study was limited by environmental factors in the workplace, such as internet and information access. Finally, there may be social desirability bias, whereby the participants may have presented responses that were more socially appropriate than their actual thoughts on the issues explored during the interviews.

We found that clinicians frequently sought answers to clinical queries arising from patient care. However, the choice of information sources was influenced by the trustworthiness and availability of the resources. Clinicians in the polyclinics commonly reported using their smartphones in clinical practice; the UpToDate® app and the Google search engine were cited as their preferred clinical information sources because of their convenience and accessibility. While our findings may have been reported in other contexts, there are significant and novel elements compared with healthcare settings around the world. For example, the implementation of internet surfing separation in public healthcare institutions raises concerns regarding clinicians' use of smartphones, as well as privacy and professionalism. This points to a need to examine regulation of, and training in, smartphone use among clinicians, and to investigate the issue further from the patient's perspective. Future studies should explore improving access to evidence-based clinical information sources other than CPGs to address the information needs of primary care clinicians. Studies examining the trustworthiness and effectiveness of app-based point-of-care information summaries, and exploring the impact of mobile device use for information-seeking at the point of care, will also be useful. Furthermore, Large Language Model (LLM)-based artificial intelligence (AI) systems, such as ChatGPT, are increasingly being developed and used across various disciplines, including healthcare. Some, such as AMIE (Articulate Medical Intelligence Explorer) and Med-PaLM 2 (based on the Pathways Language Model), have been developed specifically for healthcare [49, 50, 51]. More research into the use of AI among clinicians is needed to ensure trust, dependability, and ethical conduct.

Availability of data and materials

The datasets generated and/or analysed during the current study are not publicly available because all data obtained during the course of this study are strictly confidential; the data will be kept by the study team for at least 6 years after the end of the study and then disposed of in accordance with the Personal Data Protection Act in Singapore. Data are, however, available from Associate Professor Tang Wern Ee (co-author) upon reasonable request and with the permission of the National Healthcare Group Domain Specific Review Board (the central ethics committee).

Abbreviations

GP: General Practitioner

CPG: Clinical practice guidelines

Ministry of Health. Primary healthcare services. Singapore; 2022 [updated 31/05/2022]. Available from: https://www.moh.gov.sg/home/our-healthcare-system/healthcare-services-and-facilities/primary-healthcare-services .

González-González AI, Dawes M, Sánchez-Mateos J, Riesgo-Fuertes R, Escortell-Mayor E, Sanz-Cuesta T, et al. Information needs and information-seeking behavior of primary care physicians. The Annals of Family Medicine. 2007;5(4):345–52.

Daei A, Soleymani MR, Ashrafi-rizi H, Zargham-Boroujeni A, Kelishadi R. Clinical information seeking behavior of physicians: A systematic review. Int J Med Informatics. 2020;139:104144.

Amiel JM, Andriole DA, Biskobing DM, Brown DR, Cutrer WB, Emery MT, et al. Revisiting the core entrustable professional activities for entering residency. Acad Med. 2021;96(7S):S14–21.

College of Family Physicians Singapore. Fellowship Programme (FCFPS). Singapore; 2022 [updated 2022]. Available from: https://www.cfps.org.sg/programmes/fellowship-programme-fcfps/ .

American Library Association. Information Literacy Competency Standards for Nursing. United States of America; 2013. Available from: https://www.ala.org/acrl/standards/nursing .

Braun L, Wiesman F, van den Herik H, Hasman A. Avoiding literature overload in the medical domain. Stud Health Technol Inform. 2006;124:497–502.

Clarke MA, Belden JL, Koopman RJ, Steege LM, Moore JL, Canfield SM, et al. Information needs and information-seeking behaviour analysis of primary care physicians and nurses: a literature review. Health Info Libr J. 2013;30(3):178–90.

Ely JW, Burch RJ, Vinson DC. The information needs of family physicians: case-specific clinical questions. J Fam Pract. 1992;35(3):265–9.

Del Fiol G, Workman TE, Gorman PN. Clinical questions raised by clinicians at the point of care: a systematic review. JAMA Intern Med. 2014;174(5):710–8.

Al-Dousari E. Information Needs and Information Seeking Behaviour of Doctors in Kuwait Government Hospitals: An Exploratory Study. Loughborough University; 2009.

Young JM, Ward JE. Evidence-based medicine in general practice: beliefs and barriers among Australian GPs. J Eval Clin Pract. 2001;7(2):201–10.

Ellsworth MA, Homan JM, Cimino JJ, Peters SG, Pickering BW, Herasevich V. Point-of-care knowledge-based resource needs of clinicians: a survey from a large academic medical center. Appl Clin Inform. 2015;6(2):305–17.

Le JV, Pedersen LB, Riisgaard H, Lykkegaard J, Nexoe J, Lemmergaard J, et al. Variation in general practitioners’ information-seeking behaviour - a cross-sectional study on the influence of gender, age and practice form. Scand J Prim Health Care. 2016;34(4):327–35.

Bruin-Huisman L, Abu-Hanna A, van Weert H, Beers E. Potentially inappropriate prescribing to older patients in primary care in the Netherlands: a retrospective longitudinal study. Age Ageing. 2017;46(4):614–9.

Cahir C, Bennett K, Teljeur C, Fahey T. Potentially inappropriate prescribing and adverse health outcomes in community dwelling older patients. Br J Clin Pharmacol. 2014;77(1):201–10.

Davies K. The information-seeking behaviour of doctors: a review of the evidence. Health Info Libr J. 2007;24(2):78–94.

Gill P, Dowell AC, Neal RD, Smith N, Heywood P, Wilson AE. Evidence based general practice: a retrospective study of interventions in one training practice. BMJ. 1996;312(7034):819–21.

Salisbury C, Bosanquet N, Wilkinson E, Bosanquet A, Hasler J. The implementation of evidence-based medicine in general practice prescribing. Br J Gen Pract. 1998;48(437):1849–52.

Aakre CA, Maggio LA, Fiol GD, Cook DA. Barriers and facilitators to clinical information seeking: a systematic review. J Am Med Inform Assoc. 2019;26(10):1129–40.

Scott SD, Grimshaw J, Klassen TP, Nettel-Aguirre A, Johnson DW. Understanding implementation processes of clinical pathways and clinical practice guidelines in pediatric contexts: a study protocol. Implement Sci. 2011;6(1):133.

O’Brien JA, Jacobs LM Jr, Pierce D. Clinical practice guidelines and the cost of care: a growing alliance. Int J Technol Assess Health Care. 2000;16(04):1077–91.

Langley C, Faulkner A, Watkins C, Gray S, Harvey I. Use of guidelines in primary care–practitioners’ perspectives. Fam Pract. 1998;15(2):105–11.

Al-Ghamdi S. Popularity and impact of using smart devices in medicine: experiences in Saudi Arabia. BMC Public Health. 2018;18(1):531.

Ozdalga E, Ozdalga A, Ahuja N. The smartphone in medicine: a review of current and potential use among physicians and students. J Med Internet Res. 2012;14(5):e128.

Hedhli A, Nsir S, Ouahchi Y, Mjid M, Toujani S, Dhahri B. Contribution of mobile applications to learning and medical practice. Tunis Med. 2021;99(12):1134–40.

Liu Y, Ren W, Qiu Y, Liu J, Yin P, Ren J. The Use of Mobile Phone and Medical Apps among General Practitioners in Hangzhou City, Eastern China. JMIR mHealth uHealth. 2016;4(2):e64.

Ventola CL. Mobile devices and apps for health care professionals: uses and benefits. P T. 2014;39(5):356–64.

Gagnon MP, Pluye P, Desmartis M, Car J, Pagliari C, Labrecque M, et al. A systematic review of interventions promoting clinical information retrieval technology (CIRT) adoption by healthcare professionals. Int J Med Informatics. 2010;79(10):669–80.

National Population and Talent Division. Population in Brief 2023: Key Trends. 2023 [updated 29 Sep 2023]. Available from: https://www.population.gov.sg/media-centre/articles/population-in-brief-2023-key-trends/#:~:text=Overall%2C%20Singapore's%20total%20population%20stood,5.0%25%20increase%20from%20June%202022 .

Statista Research Department. Number of smartphone users in Singapore from 2019 to 2028. 2023 [updated 12 Sep 2023]. Available from: https://www.statista.com/statistics/494598/smartphone-users-in-singapore/#:~:text=In%202022%2C%20the%20number%20of,over%206.16%20million%20by%202028 .

Maggio LA, Aakre CA, Del Fiol G, Shellum J, Cook DA. Impact of electronic knowledge resources on clinical and learning outcomes: systematic review and meta-analysis. J Med Internet Res. 2019;21(7):e13315.

Ministry of Health. Temporary internet surfing separation implemented at all public healthcare clusters. 2018 [updated 07/11/2022]. Available from: https://www.moh.gov.sg/news-highlights/details/temporary-internet-surfacing-separation-implemented-at-all-public-healthcare-clusters .

Booth A, Hannes K, Harden A, Noyes J, Harris J, Tong A. COREQ (Consolidated Criteria for Reporting Qualitative Studies). In: Guidelines for Reporting Health Research: A User's Manual. 2014. p. 214–26.

Saunders B, Sim J, Kingstone T, Baker S, Waterfield J, Bartlam B, et al. Saturation in qualitative research: exploring its conceptualization and operationalization. Qual Quant. 2018;52(4):1893–907.

Burnard P. A method of analysing interview transcripts in qualitative research. Nurse Educ Today. 1991;11(6):461–6.

Ely JW, Osheroff JA, Gorman PN, Ebell MH, Chambliss ML, Pifer EA, et al. A taxonomy of generic clinical questions: classification study. BMJ. 2000;321(7258):429–32.

Korstjens I, Moser A. Series: Practical guidance to qualitative research. Part 2: Context, research questions and designs. Eur J Gen Pract. 2017;23(1):274–9.

Wolters Kluwer. UpToDate: Industry-leading clinical decision support. 2023. Available from: https://www.wolterskluwer.com/en/solutions/uptodate .

Dawes M, Sampson U. Knowledge management in clinical practice: a systematic review of information seeking behavior in physicians. Int J Med Informatics. 2003;71(1):9–15.

Correa VC, Lugo-Agudelo LH, Aguirre-Acevedo DC, Contreras JAP, Borrero AMP, Patiño-Lugo DF, et al. Individual, health system, and contextual barriers and facilitators for the implementation of clinical practice guidelines: a systematic metareview. Health Res Policy Syst. 2020;18(1):74.

Low S, Lim T. Utility of the electronic information resource UpToDate for clinical decision-making at bedside rounds. Singapore Med J. 2012;53(2):116–20.

Campbell JM, Umapathysivam K, Xue Y, Lockwood C. Evidence-Based Practice Point-of-Care Resources: A Quantitative Evaluation of Quality, Rigor, and Content. Worldviews Evid Based Nurs. 2015;12(6):313–27.

American Accreditation Commission International. @TRUST Certificate 2024 [updated 2024]. Available from: https://aacihealthcare.com/certificates/c173-2022-trust-usa/ .

Statista Research Department. Smartphone market in Singapore-Statistics and facts 2022 [updated 30/08/2022]. Available from: https://www.statista.com/topics/5842/smartphones-in-singapore/#dossierKeyfigures .

Cook DA, Sorensen KJ, Hersh W, Berger RA, Wilkinson JM. Features of effective medical knowledge resources to support point of care learning: a focus group study. PLoS ONE. 2013;8(11):e80318.

Gagnon M-P, Ngangue P, Payne-Gagnon J, Desmartis M. m-Health adoption by healthcare professionals: a systematic review. J Am Med Inform Assoc. 2015;23(1):212–20.

Brassil E, Gunn B, Shenoy AM, Blanchard R. Unanswered clinical questions: a survey of specialists and primary care providers. J Med Libr Assoc. 2017;105(1):4–11.

Thirunavukarasu AJ, Ting DSJ, Elangovan K, Gutierrez L, Tan TF, Ting DSW. Large language models in medicine. Nat Med. 2023;29(8):1930–40.

Tu T, Palepu A, Schaekermann M, Saab K, Freyberg J, Tanno R, et al. Towards conversational diagnostic AI. arXiv preprint arXiv:2401.05654. 2024.

Li J, Dada A, Puladi B, Kleesiek J, Egger J. ChatGPT in healthcare: A taxonomy and systematic review. Comput Methods Programs Biomed. 2024;245:108013.

Acknowledgements

Not applicable.

This study was funded by a Seedcorn Grant from the Centre for Primary Health Care Research and Innovation, a joint initiative of the Lee Kong Chian School of Medicine and the National Healthcare Group Polyclinics.

Author information

Authors and affiliations

Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Novena Campus Clinical Sciences Building 11 Mandalay Road, Singapore, 308232, Singapore

Mauricette Moling Lee & Lorainne Tudor Car

Singapore Institute of Technology, 10 Dover Drive, Singapore, 138683, Singapore

Mauricette Moling Lee

Clinical Research Unit, National Health Group Polyclinics (HQ), 3 Fusionopolis Link, Nexus @ One-North, Singapore, 138543, Singapore

Wern Ee Tang

Family Medicine and Primary Care, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Novena Campus Clinical Sciences, Building 11 Mandalay Road, Singapore, 308232, Singapore

Helen Elizabeth Smith

Department of Primary Care and Public Health, School of Public Health, Imperial College London, London, UK

Lorainne Tudor Car

Contributions

Lorainne Tudor Car conceived the idea for this study. Tang Wern Ee contributed to the design of the work and the acquisition of the data. Mauricette Lee collected the data, analysed it and wrote the manuscript with support from Tang Wern Ee, Helen Smith, and Lorainne Tudor Car. Lorainne Tudor Car and Tang Wern Ee supervised the project.

Corresponding author

Correspondence to Lorainne Tudor Car .

Ethics declarations

Ethics approval and consent to participate

This study was approved by the National Healthcare Group Domain Specific Review Board (the central ethics committee); reference number: 2018/01355. Informed consent was obtained from all participants. All methods were carried out in accordance with relevant guidelines and regulations, including the Declaration of Helsinki.

Consent for publication

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Supplementary Material 2.

Supplementary Material 3.

Supplementary Material 4.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Lee, M.M., Tang, W.E., Smith, H.E. et al. Identifying primary care clinicians’ preferences for, barriers to, and facilitators of information-seeking in clinical practice in Singapore: a qualitative study. BMC Prim. Care 25 , 172 (2024). https://doi.org/10.1186/s12875-024-02429-x

Received : 22 December 2022

Accepted : 12 May 2024

Published : 18 May 2024

  • Evidence-based medicine
  • Information-seeking behaviour

BMC Primary Care

ISSN: 2731-4553

BMC Med Res Methodol

Characterising and justifying sample size sufficiency in interview-based studies: systematic analysis of qualitative health research over a 15-year period

Konstantina Vasileiou

1 Department of Psychology, University of Bath, Building 10 West, Claverton Down, Bath, BA2 7AY UK

Julie Barnett

Susan Thorpe

2 School of Psychology, Newcastle University, Ridley Building 1, Queen Victoria Road, Newcastle upon Tyne, NE1 7RU UK

Terry Young

3 Department of Computer Science, Brunel University London, Wilfred Brown Building 108, Uxbridge, UB8 3PH UK

Associated Data

Supporting data can be accessed in the original publications. Additional File 2 lists all eligible studies that were included in the present analysis.

Choosing a suitable sample size in qualitative research is an area of conceptual debate and practical uncertainty. That sample size principles, guidelines and tools have been developed to enable researchers to set, and justify the acceptability of, their sample size is an indication that the issue constitutes an important marker of the quality of qualitative research. Nevertheless, research shows that sample size sufficiency reporting is often poor, if not absent, across a range of disciplinary fields.

A systematic analysis of single-interview-per-participant designs within three health-related journals from the disciplines of psychology, sociology and medicine, over a 15-year period, was conducted to examine whether and how sample sizes were justified and how sample size was characterised and discussed by authors. Data pertinent to sample size were extracted and analysed using qualitative and quantitative analytic techniques.

Our findings demonstrate that provision of sample size justifications in qualitative health research is limited; is not contingent on the number of interviews; and relates to the journal of publication. Defence of sample size was most frequently supported across all three journals with reference to the principle of saturation and to pragmatic considerations. Qualitative sample sizes were predominantly – and often without justification – characterised as insufficient (i.e., ‘small’) and discussed in the context of study limitations. Sample size insufficiency was seen to threaten the validity and generalizability of studies’ results, with the latter being frequently conceived in nomothetic terms.

Conclusions

We recommend, firstly, that qualitative health researchers be more transparent about evaluations of their sample size sufficiency, situating these within broader and more encompassing assessments of data adequacy . Secondly, we invite researchers critically to consider how saturation parameters found in prior methodological studies and sample size community norms might best inform, and apply to, their own project and encourage that data adequacy is best appraised with reference to features that are intrinsic to the study at hand. Finally, those reviewing papers have a vital role in supporting and encouraging transparent study-specific reporting.

Electronic supplementary material

The online version of this article (10.1186/s12874-018-0594-7) contains supplementary material, which is available to authorized users.

Sample adequacy in qualitative inquiry pertains to the appropriateness of the sample composition and size . It is an important consideration in evaluations of the quality and trustworthiness of much qualitative research [ 1 ] and is implicated – particularly for research that is situated within a post-positivist tradition and retains a degree of commitment to realist ontological premises – in appraisals of validity and generalizability [ 2 – 5 ].

Samples in qualitative research tend to be small in order to support the depth of case-oriented analysis that is fundamental to this mode of inquiry [ 5 ]. Additionally, qualitative samples are purposive, that is, selected by virtue of their capacity to provide richly-textured information, relevant to the phenomenon under investigation. As a result, purposive sampling [ 6 , 7 ] – as opposed to probability sampling employed in quantitative research – selects ‘information-rich’ cases [ 8 ]. Indeed, recent research demonstrates the greater efficiency of purposive sampling compared to random sampling in qualitative studies [ 9 ], supporting related assertions long put forward by qualitative methodologists.

Sample size in qualitative research has been the subject of enduring discussions [4, 10, 11]. Whilst the quantitative research community has established relatively straightforward, statistics-based rules to set sample sizes precisely, the intricacies of qualitative sample size determination and assessment arise from the methodological, theoretical, epistemological, and ideological pluralism that characterises qualitative inquiry (for a discussion focused on the discipline of psychology see [12]). This mitigates against clear-cut guidelines that can be invariably applied. Despite these challenges, various conceptual developments have sought to address this issue through guidance and principles [4, 10, 11, 13–20]; more recently, an evidence-based approach to sample size determination has sought to ground the discussion empirically [21–35].

Focusing on single-interview-per-participant qualitative designs, the present study aims to contribute further to the dialogue on sample size in qualitative research by offering empirical evidence on justification practices associated with sample size. We next review the existing conceptual and empirical literature on sample size determination.

Sample size in qualitative research: Conceptual developments and empirical investigations

Qualitative research experts argue that there is no straightforward answer to the question of ‘how many’ and that sample size is contingent on a number of factors relating to epistemological, methodological and practical issues [ 36 ]. Sandelowski [ 4 ] recommends that qualitative sample sizes are large enough to allow the unfolding of a ‘new and richly textured understanding’ of the phenomenon under study, but small enough so that the ‘deep, case-oriented analysis’ (p. 183) of qualitative data is not precluded. Morse [ 11 ] posits that the more useable data are collected from each person, the fewer participants are needed. She invites researchers to take into account parameters, such as the scope of study, the nature of topic (i.e. complexity, accessibility), the quality of data, and the study design. Indeed, the level of structure of questions in qualitative interviewing has been found to influence the richness of data generated [ 37 ], and so, requires attention; empirical research shows that open questions, which are asked later on in the interview, tend to produce richer data [ 37 ].

Beyond such guidance, specific numerical recommendations have also been proffered, often based on experts’ experience of qualitative research. For example, Green and Thorogood [38] maintain that the experience of most qualitative researchers conducting an interview-based study with a fairly specific research question is that little new information is generated after interviewing 20 people or so belonging to one analytically relevant participant ‘category’ (pp. 102–104). Ritchie et al. [39] suggest that studies employing individual interviews conduct no more than 50 interviews so that researchers are able to manage the complexity of the analytic task. Similarly, Britten [40] notes that large interview studies will often comprise 50 to 60 people. Experts have also offered numerical guidelines tailored to different theoretical and methodological traditions and specific research approaches, e.g. grounded theory, phenomenology [11, 41]. More recently, a quantitative tool was proposed [42] to support a priori sample size determination based on estimates of the prevalence of themes in the population. Nevertheless, this more formulaic approach raised criticisms relating to assumptions about the conceptual [43] and ontological status of ‘themes’ [44] and the linearity ascribed to the processes of sampling, data collection and data analysis [45].
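
To illustrate the style of reasoning behind such quantitative tools, consider a generic binomial sample-size argument (stated here as an assumption about the general approach, not the exact formulation of [42]). If a theme is held by a proportion θ of the population of interest, the probability that a sample of n randomly selected participants includes at least one theme-holder is 1 − (1 − θ)^n; requiring this probability to be at least p gives:

```latex
n \ge \frac{\ln(1 - p)}{\ln(1 - \theta)}
```

For example, to have a 95% chance (p = 0.95) of capturing a theme held by 10% of the population (θ = 0.1), one needs n ≥ ln(0.05)/ln(0.9) ≈ 28.4, i.e. at least 29 interviews.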

In terms of principles, Lincoln and Guba [ 17 ] proposed that sample size determination be guided by the criterion of informational redundancy , that is, sampling can be terminated when no new information is elicited by sampling more units. Following the logic of informational comprehensiveness Malterud et al. [ 18 ] introduced the concept of information power as a pragmatic guiding principle, suggesting that the more information power the sample provides, the smaller the sample size needs to be, and vice versa.

Undoubtedly, the most widely used principle for determining sample size and evaluating its sufficiency is that of saturation . The notion of saturation originates in grounded theory [ 15 ] – a qualitative methodological approach explicitly concerned with empirically-derived theory development – and is inextricably linked to theoretical sampling. Theoretical sampling describes an iterative process of data collection, data analysis and theory development whereby data collection is governed by emerging theory rather than predefined characteristics of the population. Grounded theory saturation (often called theoretical saturation) concerns the theoretical categories – as opposed to data – that are being developed and becomes evident when ‘gathering fresh data no longer sparks new theoretical insights, nor reveals new properties of your core theoretical categories’ [ 46 p. 113]. Saturation in grounded theory, therefore, does not equate to the more common focus on data repetition and moves beyond a singular focus on sample size as the justification of sampling adequacy [ 46 , 47 ]. Sample size in grounded theory cannot be determined a priori as it is contingent on the evolving theoretical categories.

Saturation – often under the terms of ‘data’ or ‘thematic’ saturation – has diffused into several qualitative communities beyond its origins in grounded theory. Alongside the expansion of its meaning, being variously equated with ‘no new data’, ‘no new themes’, and ‘no new codes’, saturation has emerged as the ‘gold standard’ in qualitative inquiry [ 2 , 26 ]. Nevertheless, and as Morse [ 48 ] asserts, whilst saturation is the most frequently invoked ‘guarantee of qualitative rigor’, ‘it is the one we know least about’ (p. 587). Certainly researchers caution that saturation is less applicable to, or appropriate for, particular types of qualitative research (e.g. conversation analysis, [ 49 ]; phenomenological research, [ 50 ]) whilst others reject the concept altogether [ 19 , 51 ].

Methodological studies in this area aim to provide guidance about saturation and develop a practical application of processes that ‘operationalise’ and evidence saturation. Guest, Bunce, and Johnson [ 26 ] analysed 60 interviews and found that saturation of themes was reached by the twelfth interview. They noted that their sample was relatively homogeneous, their research aims focused, so studies of more heterogeneous samples and with a broader scope would be likely to need a larger size to achieve saturation. Extending the enquiry to multi-site, cross-cultural research, Hagaman and Wutich [ 28 ] showed that sample sizes of 20 to 40 interviews were required to achieve data saturation of meta-themes that cut across research sites. In a theory-driven content analysis, Francis et al. [ 25 ] reached data saturation at the 17th interview for all their pre-determined theoretical constructs. The authors further proposed two main principles upon which specification of saturation be based: (a) researchers should a priori specify an initial analysis sample (e.g. 10 interviews) which will be used for the first round of analysis and (b) a stopping criterion , that is, a number of interviews (e.g. 3) that needs to be further conducted, the analysis of which will not yield any new themes or ideas. For greater transparency, Francis et al. [ 25 ] recommend that researchers present cumulative frequency graphs supporting their judgment that saturation was achieved. A comparative method for themes saturation (CoMeTS) has also been suggested [ 23 ] whereby the findings of each new interview are compared with those that have already emerged and if it does not yield any new theme, the ‘saturated terrain’ is assumed to have been established. Because the order in which interviews are analysed can influence saturation thresholds depending on the richness of the data, Constantinou et al. [ 23 ] recommend reordering and re-analysing interviews to confirm saturation. 
Hennink, Kaiser and Marconi’s [ 29 ] methodological study sheds further light on the problem of specifying and demonstrating saturation. Their analysis of interview data showed that code saturation (i.e. the point at which no additional issues are identified) was achieved at 9 interviews, but meaning saturation (i.e. the point at which no further dimensions, nuances, or insights of issues are identified) required 16–24 interviews. Although breadth can be achieved relatively soon, especially for high-prevalence and concrete codes, depth requires additional data, especially for codes of a more conceptual nature.

Critiquing the concept of saturation, Nelson [ 19 ] proposes five conceptual depth criteria in grounded theory projects to assess the robustness of the developing theory: theoretical concepts should (a) be supported by a wide range of evidence drawn from the data; (b) be demonstrably part of a network of inter-connected concepts; (c) demonstrate subtlety; (d) resonate with existing literature; and (e) successfully withstand tests of external validity.

Other work has sought to examine practices of sample size reporting and sufficiency assessment across a range of disciplinary fields and research domains, from nutrition [ 34 ] and health education [ 32 ], to education and the health sciences [ 22 , 27 ], information systems [ 30 ], organisation and workplace studies [ 33 ], human computer interaction [ 21 ], and accounting studies [ 24 ]. Others investigated PhD qualitative studies [ 31 ] and grounded theory studies [ 35 ]. These investigations commonly identify incomplete and imprecise sample size reporting, whilst assessments and justifications of sample size sufficiency are even more sporadic.

Sobal [ 34 ] examined the sample size of qualitative studies published in the Journal of Nutrition Education over a period of 30 years. Studies that employed individual interviews ( n  = 30) had an average sample size of 45 individuals, and none of these explicitly reported whether their sample size sought and/or attained saturation. A minority of articles discussed how sample-related limitations (most often concerning the type of sample rather than its size) limited generalizability. A further systematic analysis [ 32 ] of health education research over 20 years demonstrated that interview-based studies averaged 104 participants (range 2 to 720 interviewees). However, 40% did not report the number of participants. An examination of 83 qualitative interview studies in leading information systems journals [ 30 ] indicated little defence of sample sizes on the basis of recommendations by qualitative methodologists, prior relevant work, or the criterion of saturation. Rather, sample size seemed to correlate with factors such as the journal of publication or the region of study (US vs Europe vs Asia). These results led the authors to call for more rigor in determining and reporting sample size in qualitative information systems research and to recommend optimal sample size ranges for grounded theory (i.e. 20–30 interviews) and single case (i.e. 15–30 interviews) projects.

Similarly, fewer than 10% of articles in organisation and workplace studies provided a sample size justification relating to existing recommendations by methodologists, prior relevant work, or saturation [ 33 ], whilst only 17% of focus group studies in health-related journals provided an explanation of sample size (i.e. number of focus groups), with saturation being the most frequently invoked argument, followed by published sample size recommendations and practical reasons [ 22 ]. The notion of saturation was also invoked by 11 out of the 51 most highly cited studies that Guetterman [ 27 ] reviewed in the fields of education and health sciences, of which six were grounded theory studies, four phenomenological and one a narrative inquiry. Finally, analysing 641 interview-based articles in accounting, Dai et al. [ 24 ] called for more rigor since a significant minority of studies did not report a precise sample size.

Despite increasing attention to rigor in qualitative research (e.g. [ 52 ]) and more extensive methodological and analytical disclosures that seek to validate qualitative work [ 24 ], sample size reporting and sufficiency assessment remain inconsistent and partial, if not absent, across a range of research domains.

Objectives of the present study

The present study sought to enrich existing systematic analyses of the customs and practices of sample size reporting and justification by focusing on qualitative research relating to health. Additionally, this study attempted to expand previous empirical investigations by examining how qualitative sample sizes are characterised and discussed in academic narratives. Qualitative health research is an inter-disciplinary field that, due to its affiliation with the medical sciences, often faces views and positions reflective of a quantitative ethos. Thus qualitative health research constitutes an emblematic case that may help to uncover underlying philosophical and methodological differences across the scientific community that are crystallised in considerations of sample size. The present research, therefore, incorporates a comparative element on the basis of three different disciplines engaging with qualitative health research: medicine, psychology, and sociology. We chose to focus our analysis on single-per-participant-interview designs not only because this is a popular and widespread methodological choice in qualitative health research, but also because it is the method where consideration of sample size – defined as the number of interviewees – is particularly salient.

Study design

A structured search for articles reporting cross-sectional, interview-based qualitative studies was carried out and eligible reports were systematically reviewed and analysed employing both quantitative and qualitative analytic techniques.

We selected journals which (a) follow a peer review process, (b) are considered high quality and influential in their field as reflected in journal metrics, and (c) are receptive to, and publish, qualitative research (Additional File  1 presents the journals’ editorial positions in relation to qualitative research and sample considerations where available). Three health-related journals were chosen, each representing a different disciplinary field; the British Medical Journal (BMJ) representing medicine, the British Journal of Health Psychology (BJHP) representing psychology, and the Sociology of Health & Illness (SHI) representing sociology.

Search strategy to identify studies

Employing the search function of each individual journal, we used the terms ‘interview*’ AND ‘qualitative’ and limited the results to articles published between 1 January 2003 and 22 September 2017 (i.e. a 15-year review period).

Eligibility criteria

To be eligible for inclusion in the review, the article had to report a cross-sectional study design. Longitudinal studies were thus excluded whilst studies conducted within a broader research programme (e.g. interview studies nested in a trial, as part of a broader ethnography, as part of a longitudinal research) were included if they reported only single-time qualitative interviews. The method of data collection had to be individual, synchronous qualitative interviews (i.e. group interviews, structured interviews and e-mail interviews over a period of time were excluded), and the data had to be analysed qualitatively (i.e. studies that quantified their qualitative data were excluded). Mixed method studies and articles reporting more than one qualitative method of data collection (e.g. individual interviews and focus groups) were excluded. Figure  1 , a PRISMA flow diagram [ 53 ], shows the number of: articles obtained from the searches and screened; papers assessed for eligibility; and articles included in the review (Additional File  2 provides the full list of articles included in the review and their unique identifying code – e.g. BMJ01, BJHP02, SHI03). One review author (KV) assessed the eligibility of all papers identified from the searches. When in doubt, discussions about retaining or excluding articles were held between KV and JB in regular meetings, and decisions were jointly made.


PRISMA flow diagram

Data extraction and analysis

A data extraction form was developed (see Additional File  3 ) recording three areas of information: (a) information about the article (e.g. authors, title, journal, year of publication etc.); (b) information about the aims of the study, the sample size and any justification for this, the participant characteristics, the sampling technique and any sample-related observations or comments made by the authors; and (c) information about the method or technique(s) of data analysis, the number of researchers involved in the analysis, the potential use of software, and any discussion around epistemological considerations. The Abstract, Methods and Discussion (and/or Conclusion) sections of each article were examined by one author (KV) who extracted all the relevant information. This was directly copied from the articles and, when appropriate, comments, notes and initial thoughts were written down.

To examine the kinds of sample size justifications provided by articles, an inductive content analysis [ 54 ] was initially conducted. On the basis of this analysis, the categories that expressed qualitatively different sample size justifications were developed.

We also extracted or coded quantitative data regarding the following aspects:

  • Journal and year of publication
  • Number of interviews
  • Number of participants
  • Presence of sample size justification(s) (Yes/No)
  • Presence of a particular sample size justification category (Yes/No), and
  • Number of sample size justifications provided

Descriptive and inferential statistical analyses were used to explore these data.

A thematic analysis [ 55 ] was then performed on all scientific narratives that discussed or commented on the sample size of the study. These narratives were evident both in papers that justified their sample size and those that did not. To identify these narratives, in addition to the methods sections, the discussion sections of the reviewed articles were also examined and relevant data were extracted and analysed.

In total, 214 articles – 21 in the BMJ, 53 in the BJHP and 140 in the SHI – were eligible for inclusion in the review. Table  1 provides basic information about the sample sizes – measured in number of interviews – of the studies reviewed across the three journals. Figure  2 depicts the number of eligible articles published each year per journal.

Descriptive statistics of the sample sizes of eligible articles across the three journals


Number of eligible articles published each year per journal

Pairwise comparisons following a significant Kruskal-Wallis test indicated that the studies published in the BJHP had significantly ( p  < .001) smaller sample sizes than those published either in the BMJ or the SHI. Sample sizes of BMJ and SHI articles did not differ significantly from each other.
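For readers wishing to reproduce this style of analysis, the omnibus and pairwise tests can be run with standard statistical libraries. This is a minimal sketch with invented per-journal interview counts; the paper's real distributions are summarised in its Table 1.

```python
from scipy import stats

# Hypothetical numbers of interviews per study, grouped by journal
# (invented for illustration only).
bmj  = [21, 25, 28, 30, 33, 35, 40]
bjhp = [8, 9, 10, 11, 12, 13, 15]
shi  = [22, 25, 30, 35, 40, 50, 67]

# Omnibus comparison of sample sizes across the three journals
h_stat, p_omnibus = stats.kruskal(bmj, bjhp, shi)

# Pairwise follow-up with Mann-Whitney U tests (in practice a
# correction for multiple comparisons, e.g. Bonferroni, would apply)
_, p_bjhp_bmj = stats.mannwhitneyu(bjhp, bmj, alternative='two-sided')
_, p_bjhp_shi = stats.mannwhitneyu(bjhp, shi, alternative='two-sided')
_, p_bmj_shi  = stats.mannwhitneyu(bmj, shi, alternative='two-sided')
```

With these illustrative data the pattern mirrors the one reported: the BJHP group differs from both others, while BMJ and SHI do not differ significantly.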

Sample size justifications: Results from the quantitative and qualitative content analysis

Ten (47.6%) of the 21 BMJ studies, 26 (49.1%) of the 53 BJHP papers and 24 (17.1%) of the 140 SHI articles provided some sort of sample size justification. As shown in Table  2 , the majority of articles which justified their sample size provided one justification (70% of articles); fifteen studies (25%) provided two distinct justifications; one study (1.7%) gave three justifications and two studies (3.3%) expressed four distinct justifications.

Number and percentage of ‘justifying’ articles and number of justifications stated by ‘justifying’ articles

There was no association between the number of interviews (i.e. sample size) conducted and the provision of a justification (rpb = .054, p  = .433). Within journals, Mann-Whitney tests indicated that sample sizes of ‘justifying’ and ‘non-justifying’ articles in the BMJ and SHI did not differ significantly from each other. In the BJHP, ‘justifying’ articles ( Mean rank  = 31.3) had significantly larger sample sizes than ‘non-justifying’ studies ( Mean rank  = 22.7; U = 237.000, p  < .05).
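The point-biserial coefficient reported here is simply a Pearson correlation between a binary indicator (justification provided or not) and a continuous variable (number of interviews). A minimal standard-library sketch, using invented data rather than the study's:

```python
import math

def point_biserial(binary, continuous):
    """Point-biserial correlation: Pearson r between a 0/1 indicator
    and a continuous variable, via the population-SD form of the
    formula r = ((M1 - M0) / s_n) * sqrt(p * q)."""
    n = len(binary)
    g1 = [c for b, c in zip(binary, continuous) if b == 1]
    g0 = [c for b, c in zip(binary, continuous) if b == 0]
    m1, m0 = sum(g1) / len(g1), sum(g0) / len(g0)
    mean = sum(continuous) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in continuous) / n)
    p, q = len(g1) / n, len(g0) / n
    return (m1 - m0) / sd * math.sqrt(p * q)

# Invented data: 1 = justification provided, 0 = not provided
justified    = [1, 0, 1, 1, 0, 0, 1, 0]
n_interviews = [20, 15, 30, 25, 18, 22, 28, 16]
r_pb = point_biserial(justified, n_interviews)
```

A coefficient near zero, as in the result reported above (rpb = .054), indicates that studies with larger samples were no more likely to justify their sample size.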

There was a significant association between the journal a paper was published in and the provision of a justification (χ 2 (2) = 23.83, p  < .001). BJHP studies provided a sample size justification significantly more often than would be expected ( z  = 2.9); SHI studies significantly less often ( z  = − 2.4). If an article was published in the BJHP, the odds of providing a justification were 4.8 times higher than if published in the SHI. Similarly if published in the BMJ, the odds of a study justifying its sample size were 4.5 times higher than in the SHI.
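The odds ratios can be approximated directly from the counts reported above (BMJ 10/21, BJHP 26/53, SHI 24/140). This sketch shows the crude, unadjusted arithmetic only; the paper's exact figures of 4.8 and 4.5 may reflect a slightly different calculation or underlying counts.

```python
# Odds of providing a sample size justification, per journal
def odds(justified, total):
    return justified / (total - justified)

# Crude odds ratios relative to the SHI
or_bjhp_vs_shi = odds(26, 53) / odds(24, 140)
or_bmj_vs_shi = odds(10, 21) / odds(24, 140)
print(round(or_bjhp_vs_shi, 2), round(or_bmj_vs_shi, 2))  # → 4.65 4.39
```

Both crude ratios land close to the reported values, confirming the direction and rough magnitude of the association between journal and justification.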

The qualitative content analysis of the scientific narratives identified eleven different sample size justifications. These are described below and illustrated with excerpts from relevant articles. By way of a summary, the frequency with which these were deployed across the three journals is indicated in Table  3 .

Commonality, type and counts of sample size justifications across journals

Saturation

Saturation was the most commonly invoked principle (55.4% of all justifications) deployed by studies across all three journals to justify the sufficiency of their sample size. In the BMJ, two studies claimed that they achieved data saturation (BMJ17; BMJ18) and one article referred descriptively to achieving saturation without explicitly using the term (BMJ13). Interestingly, BMJ13 included data in the analysis beyond the point of saturation in search of ‘unusual/deviant observations’ and with a view to establishing findings consistency.

Thirty three women were approached to take part in the interview study. Twenty seven agreed and 21 (aged 21–64, median 40) were interviewed before data saturation was reached (one tape failure meant that 20 interviews were available for analysis). (BMJ17).

No new topics were identified following analysis of approximately two thirds of the interviews; however, all interviews were coded in order to develop a better understanding of how characteristic the views and reported behaviours were, and also to collect further examples of unusual/deviant observations. (BMJ13).

Two articles reported pre-determining their sample size with a view to achieving data saturation (BMJ08 – see extract in section In line with existing research ; BMJ15 – see extract in section Pragmatic considerations ) without further specifying if this was achieved. One paper claimed theoretical saturation (BMJ06), conceived as being reached when there were “no further recurring themes emerging from the analysis”, whilst another study argued that although the analytic categories were highly saturated, it was not possible to determine whether theoretical saturation had been achieved (BMJ04). One article (BMJ18) cited a reference to support its position on saturation.

In the BJHP, six articles claimed that they achieved data saturation (BJHP21; BJHP32; BJHP39; BJHP48; BJHP49; BJHP52) and one article stated that, given their sample size and the guidelines for achieving data saturation, it anticipated that saturation would be attained (BJHP50).

Recruitment continued until data saturation was reached, defined as the point at which no new themes emerged. (BJHP48).

It has previously been recommended that qualitative studies require a minimum sample size of at least 12 to reach data saturation (Clarke & Braun, 2013; Fugard & Potts, 2014; Guest, Bunce, & Johnson, 2006). Therefore, a sample of 13 was deemed sufficient for the qualitative analysis and scale of this study. (BJHP50).

Two studies argued that they achieved thematic saturation (BJHP28 – see extract in section Sample size guidelines ; BJHP31) and one (BJHP30) article, explicitly concerned with theory development and deploying theoretical sampling, claimed both theoretical and data saturation.

The final sample size was determined by thematic saturation, the point at which new data appears to no longer contribute to the findings due to repetition of themes and comments by participants (Morse, 1995). At this point, data generation was terminated. (BJHP31).

Five studies argued that they achieved (BJHP05; BJHP33; BJHP40; BJHP13 – see extract in section Pragmatic considerations ) or anticipated (BJHP46) saturation without any further specification of the term. BJHP17 referred descriptively to a state of achieved saturation without specifically using the term. Saturation of coding , but not saturation of themes, was claimed to have been reached by one article (BJHP18). Two articles explicitly stated that they did not achieve saturation, instead arguing that a level of theme completeness (BJHP27) or the replication of themes (BJHP53) evidenced the sufficiency of their sample size.

Furthermore, data collection ceased on pragmatic grounds rather than at the point when saturation point was reached. Despite this, although nuances within sub-themes were still emerging towards the end of data analysis, the themes themselves were being replicated indicating a level of completeness. (BJHP27).

Finally, one article criticised and explicitly renounced the notion of data saturation claiming that, on the contrary, the criterion of theoretical sufficiency determined its sample size (BJHP16).

According to the original Grounded Theory texts, data collection should continue until there are no new discoveries ( i.e. , ‘data saturation’; Glaser & Strauss, 1967). However, recent revisions of this process have discussed how it is rare that data collection is an exhaustive process and researchers should rely on how well their data are able to create a sufficient theoretical account or ‘theoretical sufficiency’ (Dey, 1999). For this study, it was decided that theoretical sufficiency would guide recruitment, rather than looking for data saturation. (BJHP16).

Ten out of the 20 BJHP articles that employed the argument of saturation used one or more citations relating to this principle.

In the SHI, one article (SHI01) claimed that it achieved category saturation based on authors’ judgment.

This number was not fixed in advance, but was guided by the sampling strategy and the judgement, based on the analysis of the data, of the point at which ‘category saturation’ was achieved. (SHI01).

Three articles described a state of achieved saturation without using the term or specifying what sort of saturation they had achieved (i.e. data, theoretical, thematic saturation) (SHI04; SHI13; SHI30) whilst another four articles explicitly stated that they achieved saturation (SHI100; SHI125; SHI136; SHI137). Two papers stated that they achieved data saturation (SHI73 – see extract in section Sample size guidelines ; SHI113), two claimed theoretical saturation (SHI78; SHI115), two referred to achieving thematic saturation (SHI87; SHI139) and two to saturated themes (SHI29; SHI50).

Recruitment and analysis ceased once theoretical saturation was reached in the categories described below (Lincoln and Guba 1985). (SHI115).

The respondents’ quotes drawn on below were chosen as representative, and illustrate saturated themes. (SHI50).

One article stated that thematic saturation was anticipated with its sample size (SHI94). Briefly referring to the difficulty in pinpointing achievement of theoretical saturation, SHI32 (see extract in section Richness and volume of data ) defended the sufficiency of its sample size on the basis of “the high degree of consensus [that] had begun to emerge among those interviewed”, suggesting that information from interviews was being replicated. Finally, SHI112 (see extract in section Further sampling to check findings consistency ) argued that it achieved saturation of discursive patterns . Seven of the 19 SHI articles cited references to support their position on saturation (see Additional File  4 for the full list of citations used by articles to support their position on saturation across the three journals).

Overall, it is clear that the concept of saturation encompassed a wide range of variants, expressed in terms such as saturation, data saturation, thematic saturation, theoretical saturation, category saturation, saturation of coding, saturation of discursive patterns, and theme completeness. It is noteworthy, however, that although these various claims were sometimes supported with reference to the literature, they were not evidenced in relation to the study at hand.

Pragmatic considerations

The determination of sample size on the basis of pragmatic considerations was the second most frequently invoked argument (9.6% of all justifications), appearing in all three journals. In the BMJ, one article (BMJ15) appealed to pragmatic reasons, relating to time constraints and the difficulty of accessing certain study populations, to justify the determination of its sample size.

On the basis of the researchers’ previous experience and the literature, [30, 31] we estimated that recruitment of 15–20 patients at each site would achieve data saturation when data from each site were analysed separately. We set a target of seven to 10 caregivers per site because of time constraints and the anticipated difficulty of accessing caregivers at some home based care services. This gave a target sample of 75–100 patients and 35–50 caregivers overall. (BMJ15).

In the BJHP, four articles mentioned pragmatic considerations relating to time or financial constraints (BJHP27 – see extract in section Saturation ; BJHP53), the participant response rate (BJHP13), and the fixed (and thus limited) size of the participant pool from which interviewees were sampled (BJHP18).

We had aimed to continue interviewing until we had reached saturation, a point whereby further data collection would yield no further themes. In practice, the number of individuals volunteering to participate dictated when recruitment into the study ceased (15 young people, 15 parents). Nonetheless, by the last few interviews, significant repetition of concepts was occurring, suggesting ample sampling. (BJHP13).

Finally, three SHI articles explained their sample size with reference to practical aspects: time constraints and project manageability (SHI56), limited availability of respondents and project resources (SHI131), and time constraints (SHI113).

The size of the sample was largely determined by the availability of respondents and resources to complete the study. Its composition reflected, as far as practicable, our interest in how contextual factors (for example, gender relations and ethnicity) mediated the illness experience. (SHI131).

Qualities of the analysis

This sample size justification (8.4% of all justifications) was mainly employed by BJHP articles and referred to an intensive, idiographic and/or latently focused analysis, i.e. one that moved beyond description. More specifically, six articles defended their sample size on the basis of an intensive analysis of transcripts and/or the idiographic focus of the study/analysis. Four of these papers (BJHP02; BJHP19; BJHP24; BJHP47) adopted an Interpretative Phenomenological Analysis (IPA) approach.

The current study employed a sample of 10 in keeping with the aim of exploring each participant’s account (Smith et al. , 1999). (BJHP19).

BJHP47 explicitly renounced the notion of saturation within an IPA approach. The other two BJHP articles conducted thematic analysis (BJHP34; BJHP38). The level of analysis – i.e. latent as opposed to a more superficial descriptive analysis – was also invoked as a justification by BJHP38, alongside the argument of an intensive analysis of individual transcripts.

The resulting sample size was at the lower end of the range of sample sizes employed in thematic analysis (Braun & Clarke, 2013). This was in order to enable significant reflection, dialogue, and time on each transcript and was in line with the more latent level of analysis employed, to identify underlying ideas, rather than a more superficial descriptive analysis (Braun & Clarke, 2006). (BJHP38).

Finally, one BMJ paper (BMJ21) defended its sample size with reference to the complexity of the analytic task.

We stopped recruitment when we reached 30–35 interviews, owing to the depth and duration of interviews, richness of data, and complexity of the analytical task. (BMJ21).

Meet sampling requirements

Meeting sampling requirements (7.2% of all justifications) was another argument employed by two BMJ and four SHI articles to explain their sample size. Achieving maximum variation sampling in terms of specific interviewee characteristics determined and explained the sample size of two BMJ studies (BMJ02; BMJ16 – see extract in section Meet research design requirements ).

Recruitment continued until sampling frame requirements were met for diversity in age, sex, ethnicity, frequency of attendance, and health status. (BMJ02).

Regarding the SHI articles, two papers explained their numbers on the basis of their sampling strategy (SHI01 – see extract in section Saturation ; SHI23), whilst one paper (SHI127) cited sampling requirements intended to attain sample heterogeneity in a particular characteristic of interest.

The combination of matching the recruitment sites for the quantitative research and the additional purposive criteria led to 104 phase 2 interviews (Internet (OLC): 21; Internet (FTF): 20; Gyms (FTF): 23; HIV testing (FTF): 20; HIV treatment (FTF): 20). (SHI23).

Of the fifty interviews conducted, thirty were translated from Spanish into English. These thirty, from which we draw our findings, were chosen for translation based on heterogeneity in depressive symptomology and educational attainment. (SHI127).

Finally, the pre-determination of sample size on the basis of sampling requirements was stated by one article though this was not used to justify the number of interviews (SHI10).

Sample size guidelines

Five BJHP articles (BJHP28; BJHP38 – see extract in section Qualities of the analysis ; BJHP46; BJHP47; BJHP50 – see extract in section Saturation ) and one SHI paper (SHI73) relied on citing existing sample size guidelines or norms within research traditions to determine and subsequently defend their sample size (7.2% of all justifications).

Sample size guidelines suggested a range between 20 and 30 interviews to be adequate (Creswell, 1998). Interviewer and note taker agreed that thematic saturation, the point at which no new concepts emerge from subsequent interviews (Patton, 2002), was achieved following completion of 20 interviews. (BJHP28).

Interviewing continued until we deemed data saturation to have been reached (the point at which no new themes were emerging). Researchers have proposed 30 as an approximate or working number of interviews at which one could expect to be reaching theoretical saturation when using a semi-structured interview approach (Morse 2000), although this can vary depending on the heterogeneity of respondents interviewed and complexity of the issues explored. (SHI73).

In line with existing research

Sample sizes of published literature in the area of the subject matter under investigation (3.5% of all justifications) were used by 2 BMJ articles as guidance and a precedent for determining and defending their own sample size (BMJ08; BMJ15 – see extract in section Pragmatic considerations ).

We drew participants from a list of prisoners who were scheduled for release each week, sampling them until we reached the target of 35 cases, with a view to achieving data saturation within the scope of the study and sufficient follow-up interviews and in line with recent studies [8–10]. (BMJ08).

Similarly, BJHP38 (see extract in section Qualities of the analysis ) claimed that its sample size was within the range of sample sizes of published studies that use its analytic approach.

Richness and volume of data

BMJ21 (see extract in section Qualities of the analysis ) and SHI32 referred to the richness, detailed nature, and volume of data collected (2.3% of all justifications) to justify the sufficiency of their sample size.

Although there were more potential interviewees from those contacted by postcode selection, it was decided to stop recruitment after the 10th interview and focus on analysis of this sample. The material collected was considerable and, given the focused nature of the study, extremely detailed. Moreover, a high degree of consensus had begun to emerge among those interviewed, and while it is always difficult to judge at what point ‘theoretical saturation’ has been reached, or how many interviews would be required to uncover exception(s), it was felt the number was sufficient to satisfy the aims of this small in-depth investigation (Strauss and Corbin 1990). (SHI32).

Meet research design requirements

Determining the sample size so that it was in line with, and served the requirements of, the research design the study adopted (2.3% of all justifications) was another justification used by 2 BMJ papers (BMJ16; BMJ08 – see extract in section In line with existing research ).

We aimed for diverse, maximum variation samples [20] totalling 80 respondents from different social backgrounds and ethnic groups and those bereaved due to different types of suicide and traumatic death. We could have interviewed a smaller sample at different points in time (a qualitative longitudinal study) but chose instead to seek a broad range of experiences by interviewing those bereaved many years ago and others bereaved more recently; those bereaved in different circumstances and with different relations to the deceased; and people who lived in different parts of the UK; with different support systems and coroners’ procedures (see Tables 1 and 2 for more details). (BMJ16).

Researchers’ previous experience

The researchers’ previous experience (possibly referring to experience with qualitative research) was invoked by BMJ15 (see extract in section Pragmatic considerations ) as a justification for the determination of sample size.

Nature of study

One BJHP paper argued that the sample size was appropriate for the exploratory nature of the study (BJHP38).

A sample of eight participants was deemed appropriate because of the exploratory nature of this research and the focus on identifying underlying ideas about the topic. (BJHP38).

Further sampling to check findings consistency

Finally, SHI112 argued that once it had achieved saturation of discursive patterns, further sampling was decided and conducted to check for consistency of the findings.

Within each of the age-stratified groups, interviews were randomly sampled until saturation of discursive patterns was achieved. This resulted in a sample of 67 interviews. Once this sample had been analysed, one further interview from each age-stratified group was randomly chosen to check for consistency of the findings. Using this approach it was possible to more carefully explore children’s discourse about the ‘I’, agency, relationality and power in the thematic areas, revealing the subtle discursive variations described in this article. (SHI112).

Thematic analysis of passages discussing sample size

This analysis resulted in two overarching thematic areas; the first concerned the variation in the characterisation of sample size sufficiency, and the second related to the perceived threats deriving from sample size insufficiency.

Characterisations of sample size sufficiency

The analysis showed that there were three main characterisations of the sample size in the articles that provided relevant comments and discussion: (a) the vast majority of these qualitative studies ( n  = 42) considered their sample size as ‘small’ and this was seen and discussed as a limitation; only two articles viewed their small sample size as desirable and appropriate; (b) a minority of articles ( n  = 4) proclaimed that their achieved sample size was ‘sufficient’; and (c) finally, a small group of studies ( n  = 5) characterised their sample size as ‘large’. Whilst achieving a ‘large’ sample size was sometimes viewed positively because it led to richer results, there were also occasions when a large sample size was problematic rather than desirable.

‘Small’ but why and for whom?

A number of articles which characterised their sample size as ‘small’ did so against an implicit or explicit quantitative framework of reference. Interestingly, three studies that claimed to have achieved data saturation or ‘theoretical sufficiency’ nonetheless noted their ‘small’ sample size as a limitation in their discussion, raising the question of why, or for whom, the sample size was considered small given that the qualitative criterion of saturation had been satisfied.

The current study has a number of limitations. The sample size was small (n = 11) and, however, large enough for no new themes to emerge. (BJHP39). The study has two principal limitations. The first of these relates to the small number of respondents who took part in the study. (SHI73).

Other articles appeared to accept that their sample was flawed because of its small size (as well as other compositional ‘deficits’, e.g. non-representativeness, biases, self-selection) or anticipated that they might be criticised for their small sample size. The imagined audience – perhaps a reviewer or reader – seemed to be one inclined to hold the tenets of quantitative research, and certainly one to whom it was important to signal recognition that small samples were likely to be problematic. That one’s sample might be thought small was often presented as a limitation couched in a discourse of regret or apology.

Very occasionally, the articulation of the small size as a limitation was explicitly aligned against an espoused positivist framework and quantitative research.

This study has some limitations. Firstly, the 100 incidents sample represents a small number of the total number of serious incidents that occurs every year. 26 We sent out a nationwide invitation and do not know why more people did not volunteer for the study. Our lack of epidemiological knowledge about healthcare incidents, however, means that determining an appropriate sample size continues to be difficult. (BMJ20).

Indicative of an apparent oscillation of qualitative researchers between the different requirements and protocols demarcating the quantitative and qualitative worlds, there were a few instances of articles which briefly recognised their ‘small’ sample size as a limitation, but then defended their study on more qualitative grounds, such as their ability and success at capturing the complexity of experience and delving into the idiographic, and at generating particularly rich data.

This research, while limited in size, has sought to capture some of the complexity attached to men’s attitudes and experiences concerning incomes and material circumstances. (SHI35). Our numbers are small because negotiating access to social networks was slow and labour intensive, but our methods generated exceptionally rich data. (BMJ21). This study could be criticised for using a small and unrepresentative sample. Given that older adults have been ignored in the research concerning suntanning, fair-skinned older adults are the most likely to experience skin cancer, and women privilege appearance over health when it comes to sunbathing practices, our study offers depth and richness of data in a demographic group much in need of research attention. (SHI57).

‘Good enough’ sample sizes

Only four articles expressed some degree of confidence that their achieved sample size was sufficient. For example, SHI139, in line with its justification of thematic saturation, expressed trust in the sufficiency of its sample size despite the poor response rate. Similarly, BJHP04, which did not provide a sample size justification, argued that it targeted a larger sample size in order to eventually recruit a sufficient number of interviewees, due to an anticipated low response rate.

Twenty-three people with type I diabetes from the target population of 133 ( i.e. 17.3%) consented to participate but four did not then respond to further contacts (total N = 19). The relatively low response rate was anticipated, due to the busy life-styles of young people in the age range, the geographical constraints, and the time required to participate in a semi-structured interview, so a larger target sample allowed a sufficient number of participants to be recruited. (BJHP04).

Two other articles (BJHP35; SHI32) linked the claimed sufficiency to the scope (i.e. ‘small, in-depth investigation’), aims and nature (i.e. ‘exploratory’) of their studies, thus anchoring their numbers to the particular context of their research. Nevertheless, claims of sample size sufficiency were sometimes undermined when they were juxtaposed with an acknowledgement that a larger sample size would be more scientifically productive.

Although our sample size was sufficient for this exploratory study, a more diverse sample including participants with lower socioeconomic status and more ethnic variation would be informative. A larger sample could also ensure inclusion of a more representative range of apps operating on a wider range of platforms. (BJHP35).

‘Large’ sample sizes: promise or peril?

Three articles (BMJ13; BJHP05; BJHP48), which all provided the justification of saturation, characterised their sample size as ‘large’ and narrated this oversufficiency in positive terms, as it allowed richer data and findings and enhanced the potential for generalisation. The type of generalisation aspired to (BJHP48), however, was not further specified.

This study used rich data provided by a relatively large sample of expert informants on an important but under-researched topic. (BMJ13). Qualitative research provides a unique opportunity to understand a clinical problem from the patient’s perspective. This study had a large diverse sample, recruited through a range of locations and used in-depth interviews which enhance the richness and generalizability of the results. (BJHP48).

Whilst a ‘large’ sample size was endorsed and valued by some qualitative researchers, within the psychological tradition of IPA a ‘large’ sample size was counter-normative and therefore needed to be justified. Four BJHP studies, all adopting IPA, asserted the appropriateness or desirability of ‘small’ sample sizes (BJHP41; BJHP45) or hastened to explain why they included a larger-than-typical sample size (BJHP32; BJHP47). For example, BJHP32 below provides a rationale for how an IPA study can accommodate a large sample size and why this was suitable for the purposes of the particular research. To strengthen the explanation for choosing a non-normative sample size, previous IPA research using a similar sample size is cited as a precedent.

Small scale IPA studies allow in-depth analysis which would not be possible with larger samples (Smith et al. , 2009). (BJHP41). Although IPA generally involves intense scrutiny of a small number of transcripts, it was decided to recruit a larger diverse sample as this is the first qualitative study of this population in the United Kingdom (as far as we know) and we wanted to gain an overview. Indeed, Smith, Flowers, and Larkin (2009) agree that IPA is suitable for larger groups. However, the emphasis changes from an in-depth individualistic analysis to one in which common themes from shared experiences of a group of people can be elicited and used to understand the network of relationships between themes that emerge from the interviews. This large-scale format of IPA has been used by other researchers in the field of false-positive research. Baillie, Smith, Hewison, and Mason (2000) conducted an IPA study, with 24 participants, of ultrasound screening for chromosomal abnormality; they found that this larger number of participants enabled them to produce a more refined and cohesive account. (BJHP32).

The IPA articles found in the BJHP were the only instances where a ‘small’ sample size was advocated and a ‘large’ sample size problematized and defended. These IPA studies illustrate that the characterisation of sample size sufficiency can be a function of researchers’ theoretical and epistemological commitments rather than the result of an ‘objective’ sample size assessment.

Threats from sample size insufficiency

As shown above, the majority of articles that commented on their sample size simultaneously characterised it as small and problematic. On those occasions when authors did not simply cite their ‘small’ sample size as a study limitation but went on to provide an account of how and why a small sample size was problematic, two important scientific qualities of the research seemed to be threatened: the generalizability and validity of results.

Generalizability

Those who characterised their sample as ‘small’ connected this to limited potential for generalisation of the results. Other features of the sample – often some kind of compositional particularity – were also linked to limited potential for generalisation. Although the articles did not always explicitly state what form of generalisation they referred to (see BJHP09), generalisation was mostly conceived in nomothetic terms, that is, as the potential to draw inferences from the sample to the broader study population (‘representational generalisation’ – see BJHP31) and, less often, to other populations or cultures.

It must be noted that samples are small and whilst in both groups the majority of those women eligible participated, generalizability cannot be assumed. (BJHP09). The study’s limitations should be acknowledged: Data are presented from interviews with a relatively small group of participants, and thus, the views are not necessarily generalizable to all patients and clinicians. In particular, patients were only recruited from secondary care services where COFP diagnoses are typically confirmed. The sample therefore is unlikely to represent the full spectrum of patients, particularly those who are not referred to, or who have been discharged from dental services. (BJHP31).

Without explicitly using the term generalisation, two SHI articles noted how their ‘small’ sample size imposed limits on ‘the extent that we can extrapolate from these participants’ accounts’ (SHI114) or to the possibility ‘to draw far-reaching conclusions from the results’ (SHI124).

Interestingly, only a minority of articles alluded to, or invoked, a type of generalisation that is aligned with qualitative research, that is, idiographic generalisation (i.e. generalisation that can be made from and about cases [ 5 ]). These articles, all published in the discipline of sociology, defended their findings in terms of the possibility of drawing logical and conceptual inferences to other contexts and of generating understanding with the potential to advance knowledge, despite their ‘small’ size. One article (SHI139) clearly contrasted nomothetic (statistical) generalisation with idiographic generalisation, arguing that the lack of statistical generalizability does not nullify the relevance of qualitative research beyond the sample studied.

Further, these data do not need to be statistically generalisable for us to draw inferences that may advance medicalisation analyses (Charmaz 2014). These data may be seen as an opportunity to generate further hypotheses and are a unique application of the medicalisation framework. (SHI139). Although a small-scale qualitative study related to school counselling, this analysis can be usefully regarded as a case study of the successful utilisation of mental health-related resources by adolescents. As many of the issues explored are of relevance to mental health stigma more generally, it may also provide insights into adult engagement in services. It shows how a sociological analysis, which uses positioning theory to examine how people negotiate, partially accept and simultaneously resist stigmatisation in relation to mental health concerns, can contribute to an elucidation of the social processes and narrative constructions which may maintain as well as bridge the mental health service gap. (SHI103).

Only one article (SHI30) used the term transferability to argue for the potential wider relevance of the results, which was attributed more to the composition of the sample (i.e. its diversity) than to the sample size.

Internal validity

The second major concern that arose from a ‘small’ sample size pertained to the internal validity of findings (i.e. here the term is used to denote the ‘truth’ or credibility of research findings). Authors expressed uncertainty about the degree of confidence that could be placed in particular aspects or patterns of their results, primarily those that concerned some form of differentiation on the basis of relevant participant characteristics.

The information source preferred seemed to vary according to parents’ education; however, the sample size is too small to draw conclusions about such patterns. (SHI80). Although our numbers were too small to demonstrate gender differences with any certainty, it does seem that the biomedical and erotic scripts may be more common in the accounts of men and the relational script more common in the accounts of women. (SHI81).

In other instances, articles expressed uncertainty about whether their results accounted for the full spectrum and variation of the phenomenon under investigation. In other words, a ‘small’ sample size (alongside compositional ‘deficits’ such as a not statistically representative sample) was seen to threaten the ‘content validity’ of the results which in turn led to constructions of the study conclusions as tentative.

Data collection ceased on pragmatic grounds rather than when no new information appeared to be obtained ( i.e. , saturation point). As such, care should be taken not to overstate the findings. Whilst the themes from the initial interviews seemed to be replicated in the later interviews, further interviews may have identified additional themes or provided more nuanced explanations. (BJHP53). …it should be acknowledged that this study was based on a small sample of self-selected couples in enduring marriages who were not broadly representative of the population. Thus, participants may not be representative of couples that experience postnatal PTSD. It is therefore unlikely that all the key themes have been identified and explored. For example, couples who were excluded from the study because the male partner declined to participate may have been experiencing greater interpersonal difficulties. (BJHP03).

In other instances, articles attempted to preserve a degree of credibility of their results, despite the recognition that the sample size was ‘small’. Clarity and sharpness of emerging themes and alignment with previous relevant work were the arguments employed to warrant the validity of the results.

This study focused on British Chinese carers of patients with affective disorders, using a qualitative methodology to synthesise the sociocultural representations of illness within this community. Despite the small sample size, clear themes emerged from the narratives that were sufficient for this exploratory investigation. (SHI98).

The present study sought to examine how qualitative sample sizes in health-related research are characterised and justified. In line with previous studies [ 22 , 30 , 33 , 34 ] the findings demonstrate that reporting of sample size sufficiency is limited; just over 50% of articles in the BMJ and BJHP and 82% in the SHI did not provide any sample size justification. Providing a sample size justification was not related to the number of interviews conducted, but it was associated with the journal that the article was published in, indicating the influence of disciplinary or publishing norms, also reported in prior research [ 30 ]. This lack of transparency about sample size sufficiency is problematic given that most qualitative researchers would agree that it is an important marker of quality [ 56 , 57 ]. Moreover, and with the rise of qualitative research in social sciences, efforts to synthesise existing evidence and assess its quality are obstructed by poor reporting [ 58 , 59 ].

When authors justified their sample size, our findings indicate that sufficiency was mostly appraised with reference to features that were intrinsic to the study, in agreement with general advice on sample size determination [ 4 , 11 , 36 ]. The principle of saturation was the most commonly invoked argument [ 22 ], accounting for 55% of all justifications. A wide range of variants of saturation was evident, corroborating the proliferation of meanings of the term [ 49 ] and reflecting different underlying conceptualisations or models of saturation [ 20 ]. Nevertheless, claims of saturation were never substantiated in relation to procedures conducted in the study itself, echoing similar observations in the literature [ 25 , 30 , 47 ]. Claims of saturation were sometimes supported with citations of other literature, suggesting a removal of the concept from the characteristics of the study at hand. Pragmatic considerations, such as resource constraints or participant response rate and availability, constituted the second most frequently used argument, accounting for approximately 10% of justifications, and a further 23% of justifications represented other intrinsic-to-the-study characteristics (i.e. qualities of the analysis, meeting sampling or research design requirements, richness and volume of the data obtained, nature of the study, further sampling to check consistency of findings).

Only 12% of mentions of sample size justification pertained to arguments that were external to the study at hand, in the form of existing sample size guidelines and prior research that sets precedents. Whilst community norms and prior research can establish useful rules of thumb for estimating sample sizes [ 60 ] – and reveal what sizes are more likely to be acceptable within research communities – researchers should avoid adopting these norms uncritically, especially when such guidelines [e.g. 30 , 35 ] might be based on research that does not provide adequate evidence of sample size sufficiency. Similarly, whilst methodological research that seeks to demonstrate the achievement of saturation is invaluable, since it explicates the parameters upon which saturation is contingent and indicates when a research project is likely to require a smaller or a larger sample [e.g. 29 ], specific numbers at which saturation was achieved within these projects cannot be routinely extrapolated to other projects. We concur with existing views [ 11 , 36 ] that the consideration of the characteristics of the study at hand, such as the epistemological and theoretical approach, the nature of the phenomenon under investigation, the aims and scope of the study, the quality and richness of data, or the researcher’s experience and skills in conducting qualitative research, should be the primary guide in determining sample size and assessing its sufficiency.

Moreover, although numbers in qualitative research are not unimportant [ 61 ], sample size should not be considered alone but be embedded in the more encompassing examination of data adequacy [ 56 , 57 ]. Erickson’s [ 62 ] dimensions of ‘evidentiary adequacy’ are useful here. He explains the concept in terms of adequate amounts of evidence, adequate variety in kinds of evidence, adequate interpretive status of evidence, adequate disconfirming evidence, and adequate discrepant case analysis. All dimensions might not be relevant across all qualitative research designs, but this illustrates the thickness of the concept of data adequacy, taking it beyond sample size.

The present research also demonstrated that sample sizes were commonly seen as ‘small’ and insufficient and were discussed as a limitation. Often unjustified (and in two cases incongruent with the articles’ own claims of saturation), these characterisations imply that sample size in qualitative health research is often adversely judged (or expected to be judged) against an implicit, yet omnipresent, quasi-quantitative standpoint. Indeed, there were a few instances in our data where authors appeared, possibly in response to reviewers, to resist some form of quantification of their results. This implicit reference point became more apparent when authors discussed the threats deriving from an insufficient sample size. Whilst the concerns about internal validity might be legitimate to the extent that qualitative research projects, which are broadly related to realism, set out to examine phenomena in sufficient breadth and depth, the concerns around generalizability revealed a conceptualisation that is not compatible with purposive sampling. The limited potential for generalisation resulting from a small sample size was often discussed in nomothetic, statistical terms. Only occasionally was analytic or idiographic generalisation invoked to warrant the value of the study’s findings [ 5 , 17 ].

Strengths and limitations of the present study

We note, first, the limited number of health-related journals reviewed, so that only a ‘snapshot’ of qualitative health research has been captured. Examining additional disciplines (e.g. nursing sciences) as well as inter-disciplinary journals would add to the findings of this analysis. Nevertheless, our study is the first to provide comparative insights on the basis of disciplines that are differently attached to the legacy of positivism, and it analysed literature published over a lengthy period of time (15 years). Guetterman [ 27 ] also examined health-related literature, but this analysis was restricted to the 26 most highly cited articles published over a period of five years, whilst Carlsen and Glenton’s [ 22 ] study concentrated on focus-group health research. Moreover, although it was our intention to examine sample size justification in relation to the epistemological and theoretical positions of articles, this proved challenging, largely owing to the absence of relevant information or the difficulty of clearly discerning articles’ positions [ 63 ] and classifying them under specific approaches (e.g. studies often combined elements from different theoretical and epistemological traditions). We believe that such an analysis would yield useful insights, as it links the methodological issue of sample size to the broader philosophical stance of the research. Despite these limitations, the analysis of the characterisation of sample size and of the threats seen to accrue from insufficient sample size enriches our understanding of sample size (in)sufficiency argumentation by linking it to other features of the research. As the peer-review process becomes increasingly public, future research could usefully examine how reporting around sample size sufficiency and data adequacy might be influenced by the interactions between authors and reviewers.

The past decade has seen a growing appetite in qualitative research for an evidence-based approach to sample size determination and to evaluations of the sufficiency of sample size. Despite the conceptual and methodological developments in the area, the findings of the present study confirm previous studies in concluding that appraisals of sample size sufficiency are either absent or poorly substantiated. To ensure and maintain high quality research that will encourage greater appreciation of qualitative work in health-related sciences [ 64 ], we argue that qualitative researchers should be more transparent and thorough in their evaluation of sample size as part of their appraisal of data adequacy. We would encourage the practice of appraising sample size sufficiency with close reference to the study at hand and would thus caution against responding to the growing methodological research in this area with a decontextualised application of sample size numerical guidelines, norms and principles. Although researchers might find sample size community norms serve as useful rules of thumb, we recommend methodological knowledge is used to critically consider how saturation and other parameters that affect sample size sufficiency pertain to the specifics of the particular project. Those reviewing papers have a vital role in encouraging transparent study-specific reporting. The review process should support authors to exercise nuanced judgments in decisions about sample size determination in the context of the range of factors that influence sample size sufficiency and the specifics of a particular study. In light of the growing methodological evidence in the area, transparent presentation of such evidence-based judgement is crucial and in time should surely obviate the seemingly routine practice of citing the ‘small’ size of qualitative samples among the study limitations.

Additional Files

Editorial positions on qualitative research and sample considerations (where available). (DOCX 12 kb)

List of eligible articles included in the review ( N  = 214). (DOCX 38 kb)

Data Extraction Form. (DOCX 15 kb)

Citations used by articles to support their position on saturation. (DOCX 14 kb)

Acknowledgments

We would like to thank Dr. Paula Smith and Katharine Lee for their comments on a previous draft of this paper as well as Natalie Ann Mitchell and Meron Teferra for assisting us with data extraction.

This research was initially conceived of and partly conducted with financial support from the Multidisciplinary Assessment of Technology Centre for Healthcare (MATCH) programme (EP/F063822/1 and EP/G012393/1). The research continued and was completed independent of any support. The funding body did not have any role in the study design, the collection, analysis and interpretation of the data, in the writing of the paper, and in the decision to submit the manuscript for publication. The views expressed are those of the authors alone.

Availability of data and materials

Authors’ contributions

JB and TY conceived the study; KV, JB, and TY designed the study; KV identified the articles and extracted the data; KV and JB assessed eligibility of articles; KV, JB, ST, and TY contributed to the analysis of the data, discussed the findings and early drafts of the paper; KV developed the final manuscript; KV, JB, ST, and TY read and approved the manuscript.

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

Terry Young is an academic who undertakes research and occasional consultancy in the areas of health technology assessment, information systems, and service design. He is unaware of any direct conflict of interest with respect to this paper. All other authors have no competing interests to declare.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

2 The publication of qualitative studies in the BMJ was significantly reduced from 2012 onwards; this appears to coincide with the launch of BMJ Open, to which qualitative studies were possibly redirected.

1 A non-parametric test of difference for independent samples was performed since the variable number of interviews violated assumptions of normality according to the standardized scores of skewness and kurtosis (BMJ: z skewness = 3.23, z kurtosis = 1.52; BJHP: z skewness = 4.73, z kurtosis = 4.85; SHI: z skewness = 12.04, z kurtosis = 21.72) and the Shapiro-Wilk test of normality ( p  < .001).
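The footnote’s procedure can be illustrated in code. The sketch below is not the authors’ analysis script and uses invented placeholder data; it simply shows how standardised skewness and kurtosis scores and a Shapiro-Wilk test, as described above, might be computed with SciPy before opting for a non-parametric test of difference.

```python
# Illustrative sketch only: simulated interview counts, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical right-skewed distribution of interviews per article
n_interviews = rng.lognormal(mean=3.0, sigma=0.6, size=80)

n = len(n_interviews)
# Standardised scores: statistic divided by its approximate standard error
z_skew = stats.skew(n_interviews) / np.sqrt(6 / n)
z_kurt = stats.kurtosis(n_interviews) / np.sqrt(24 / n)  # excess kurtosis

# Shapiro-Wilk test of normality
w_stat, p_value = stats.shapiro(n_interviews)

# If |z| > 1.96 or p < .05, normality is rejected and a non-parametric
# test of difference for independent samples (e.g. Mann-Whitney U or
# Kruskal-Wallis) would be used instead of a parametric alternative.
print(z_skew, z_kurt, p_value)
```

Under these simulated skewed data, both checks point towards a non-parametric test, mirroring the decision reported in the footnote.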

Contributor Information

Konstantina Vasileiou, Phone: +44 (0) 1225 383167, Email: [email protected] .

Julie Barnett, Email: [email protected] .

Susan Thorpe, Email: [email protected] .

Terry Young, Email: [email protected] .

  • Open access
  • Published: 13 May 2024

“ We might not have been in hospital, but we were frontline workers in the community ”: a qualitative study exploring unmet need and local community-based responses for marginalised groups in Greater Manchester during the COVID-19 pandemic

  • Stephanie Gillibrand 1 ,
  • Ruth Watkinson 2 ,
  • Melissa Surgey 2 ,
  • Basma Issa 3 &
  • Caroline Sanders 2 , 4  

BMC Health Services Research volume  24 , Article number:  621 ( 2024 )


Background

The response to the COVID-19 pandemic saw a significant increase in demand for the voluntary, community, faith and social enterprise (VCFSE) sector to provide support to local communities. In Greater Manchester (GM), the VCFSE sector and informal networks provided health and wellbeing support in multiple ways, culminating in its crucial supportive role in the provision of the COVID-19 vaccination rollout across the GM city region. However, the support provided by the VCFSE sector during the pandemic remains under-recognised. The aims of the study were to: understand the views and experiences of marginalised communities in GM during the COVID-19 pandemic; explore how community engagement initiatives played a role during the pandemic and vaccine rollout; assess what can be learnt from the work of key stakeholders (community members, VCFSEs, health-system stakeholders) for future health research and service delivery.

Methods

The co-designed study utilised a participatory approach throughout and was co-produced with a Community Research Advisory Group (CRAG). Focus groups and semi-structured interviews were conducted remotely between September-November 2021, with 35 participants from local marginalised communities, health and care system stakeholders and VCFSE representatives. Thematic framework analysis was used to analyse the data.

Results

Local communities in GM were not supported sufficiently by mainstream services during the course of the COVID-19 pandemic, resulting in increased pressure on the VCFSE sector to respond to local communities’ needs. Community-based approaches were deemed crucial to the success of the vaccination drive and in providing support to local communities more generally during the pandemic, as such approaches were in a unique position to reach members of diverse communities and boost uptake of the vaccine. Despite this, the support delivered by the VCFSE sector remains under-recognised and under-valued by the health system and decision-makers.

Conclusions

A number of challenges associated with collaborative working were experienced by the VSCE sector and health system in delivering the vaccination programme in partnership with the VCFSE sector. There is a need to create a broader, more inclusive health system which allows and promotes inter-sectoral working. Flexibility and adaptability in ongoing and future service delivery should be championed for greater cross-sector working.

The response to the COVID-19 pandemic saw a significant increase in demand for the voluntary, community, faith and social enterprise (VCFSE) sector to provide support to local communities [ 1 , 2 ]. The role of communities was seen as crucial to supporting the pandemic response, to better mobilise public health pandemic responses and supportive health services [ 3 ]. VCFSE organisations nationally had to quickly mobilise to adapt their service offer to meet increased demand, new gaps in service provision and deliver services in different ways to address the challenges faced by local communities. These included loss of income and financial hardship, closure of schools and childcare, increased social isolation, digital exclusion, and increased mental health issues [ 4 ]. However, previous research has concluded that support provided by the voluntary sector during the pandemic has been under-recognised [ 5 ]. Some authors have explored the role that VCFSEs played at the national level, in supporting communities during the pandemic [ 4 , 5 , 6 ]. Yet, whilst it is well-known that tens of thousands of UK volunteers supported local vaccine delivery [ 7 ], no existing academic literature has explored the role of VCFSEs in supporting the vaccination rollout.

We focus on Greater Manchester (GM), where VCFSE organisations, including smaller, community-based networks, responded to increased demand from local communities and the NHS by providing key health and wellbeing-related services: food and care packages for clinically vulnerable households, food bank services, support for people experiencing homelessness, mental health and domestic violence services, and support to local community organisations [ 8 ]. This support culminated in the sector’s role in the delivery of the COVID-19 vaccination rollout, in response to the need for mass immunisation across the region.

Over the last decade, the English health and care system has been evolving to integrate health and social care. A key focus is building closer working relationships between the NHS, local authorities and other providers– including the VCFSE sector– to deliver joined-up care for communities [ 9 , 10 ]. To aid integration, a new model for organising health and care on different geographical footprints has been developed: Integrated Care Systems (ICSs), place-based partnerships and neighbourhood models. These collaborative partnerships bring together existing health and care organisations to coordinate health and care planning and delivery in a more integrated way, and include councils, NHS provider trusts, Primary Care Networks, GP federations and health and care commissioners [ 11 ]. These new geographically-based partnerships have an emphasis on collaborative working beyond traditional health and care partners. This includes acknowledging the role that VCFSE organisations can have in supporting wider population wellbeing, particularly as part of multi-disciplinary neighbourhood teams embedded in local communities [ 12 ]. National guidance on the development of ICSs and place-based partnerships strongly encourages health and care leaders to include VCFSE organisations in partnership arrangements and embed them into service delivery [ 12 ]. In GM, the partnership working approach pre-dates the formal mandating of ICSs, with a combined authority which brings together the ten local authorities, an association of Clinical Commissioning Groups (CCGs) which represented health commissioners, and a VCFSE umbrella group which also operates as a joint venture to represent the sector’s interests at a GM level Footnote 1 . However, reorganisation to the ICS system may present new local challenges for the VCFSE sector to find a meaningful ‘seat at the table’.
Notwithstanding this, the COVID-19 pandemic coincided with the development of ICSs and place-based partnerships, arguably serving as one of the earliest and most intense tests of partnership working across health and care organisations within the current policy landscape.

Here, we present findings from a co-designed qualitative research project, drawing on insights from 35 participants, including members of diverse communities in GM, VCFSE participants, and key decision-making health and care system stakeholders. The aims of the study were to: understand the views and experiences of marginalised communities in GM during the COVID-19 pandemic; explore how community engagement initiatives played a role during the pandemic and vaccine rollout; and assess what can be learnt from the work of key stakeholders (including community members, VCFSEs, and health and care system stakeholders) for future health research and service delivery. The rationale for the study developed from a related piece of work assessing inequalities in COVID-19 vaccine uptake in GM [ 13 ]. At that time, there was little research on the experiences of under-served communities during the pandemic. As such, the public and stakeholder engagement for the related project identified a need for a qualitative workstream to explore more fully the drivers behind and context surrounding the vaccination programme in GM, also centring local communities’ experiences during the pandemic (explored in a related paper [ 14 ]).

In this paper, we examine the role the VCFSE sector played in supporting unmet needs for marginalised groups in GM during the COVID-19 pandemic and as part of the rapid rollout of the COVID-19 vaccination programme. We consider the opportunities and barriers that may influence the full integration of the VCFSE sector into health and care services in the future. This paper provides additional evidence around the role of local community-led support in the context of identified unmet needs from marginalised local communities. Whilst focused on GM, it provides an exemplar of the role of VCFSEs and community networks during the pandemic, with relevant learning for other regions and international settings with place-based partnerships.

Study design

The study utilised a participatory approach throughout and was co-designed and co-produced with a diverse Community Research Advisory Group (CRAG). The CRAG comprised members of local community groups that were disproportionately impacted by the COVID-19 pandemic, including one member who is a co-author on this paper. Members were drawn from three VCFSE organisations working with specific ethnic minority communities, including Caribbean and African, South Asian and Syrian communities.

CRAG members acted as champions for the research, supporting design of appropriate information and fostering connections for recruitment via their existing community networks. The strong partnerships built through our approach were crucial to enabling a sense of trust and legitimacy for the research amongst underserved communities invited to participate.

Interviews and focus groups took place between September-November 2021 and sought to explore: the context surrounding the rollout of the vaccination programme; key aspects of support delivered as part of the vaccination programme; the use of localised approaches to support vaccine delivery including engagement initiatives, as well as broader community-level responses to the COVID-19 pandemic; perceptions around barriers to vaccine uptake Footnote 2 ; experiences of local communities (including healthcare) during the pandemic Footnote 3 . During the data collection period, national pandemic restrictions were largely lifted with no restrictions on social distancing or limits to gatherings, and all public venues reopened. A self-isolation period of 10 days after a positive COVID-19 test remained a legal requirement, but self-isolation after contact with a positive case was not required if fully vaccinated [ 15 ]. By July 2021, every UK adult had been offered their first dose of the COVID-19 vaccine, with every adult offered both doses by mid-September 2021 [ 16 ]. By early September 2021, more than 92 million doses had been administered in the UK [ 15 ].

Interviews and focus groups were conducted by one member of the research team (SG) and were conducted remotely due to the pandemic, via Zoom and telephone calls. The limitations of undertaking remote qualitative research interviews are acknowledged in academic literature, including potential restrictions to expressing compassion and assessing the participant’s environment [ 17 , 18 ]. However, given the remaining prevalence of COVID-19 at the time of interview, it was judged that the ensuing risk posed by COVID-19 to both researchers and participants outweighed the potential drawbacks. Nevertheless, participants were offered face-to-face options if they were unable to participate remotely to maximise inclusion (although no participants chose to participate face-to-face).

Interviews and focus groups were audio recorded with an encrypted recorder and transcribed by a professional transcription service. Informed written consent to participate was taken prior to the interviews and focus groups. The average length of the interviews was 34 min and the average length of the focus groups was 99 min. Two focus groups were co-facilitated by a CRAG member, a member of the local community who works for a mental health charity that supports local South Asian communities, who also provided translation support. With respect to the authors’ positionality, co-authors SG, RW, MS and CS are university researchers in academic roles and had prior links to the CRAG members via a wider community forum (co-ordinated by the NIHR funded Applied Research Collaboration for Greater Manchester). The wider group met regularly to discuss and share learning regarding community experiences, community action and related research during the pandemic. BI is a member of the CRAG and a member of a local Syrian community.

Sampling & recruitment

The sampling strategy for community participants centred around groups that had been disproportionately affected by the COVID-19 pandemic in England, including ethnic minority groups, young adults, and those with long-term physical and mental health conditions. VCFSE participants included community and religious leaders, members of local community VCFSE organisations and smaller, informal community networks and groups from local communities. Health and care system stakeholders included local council workers and those organising the vaccination response (e.g. in CCGs and GP Federations). Characteristics of the sample are provided in Table  1 . Overall, the study achieved a diverse sample of participants on the basis of gender and ethnicity.

A combination of purposive and snowball sampling was used to recruit via pre-established links and connections to community networks and stakeholders, to ensure the inclusion of specific seldom-heard groups. For example, members of African and Caribbean communities were recruited via a charity which supports the health of these groups, and members of South Asian communities were recruited via a mental health charity.

Quotes are described by respondent type (community member, VCFSE participant, health and care system stakeholder) and participant identifier number to maintain anonymity whilst providing important contextual detail.

Data analysis

We analysed the data using an adapted framework approach [ 19 ]. We adopted a framework approach to analysis as this is viewed as a helpful method when working within large multidisciplinary teams or when not all members of the team have experience of qualitative data analysis, as was the case within our team. This structured thematic approach is also considered valuable when handling large volumes of data [ 20 , 21 ] and was found to be a helpful way to present, discuss and refine the themes within the research team and CRAG meetings. We created an initial list of themes from coding four transcripts and from discussions with CRAG members: personal or family experiences/stories; work/education experiences; racism and racialised experiences; trust and mistrust; fear and anxiety; value of community/community approaches; access to services including healthcare; operational and logistical factors around vaccine rollout; communication and (mis)information. We used this set of themes and sub-themes to code the remaining transcripts, adding further inductively generated codes as analysis progressed, with regular discussion within the team.

We shared transcript coding amongst the study team, with one team member responsible for collating coded transcripts into a charting framework of themes/subthemes with illustrative transcript extracts. The themes were refined throughout the analysis period (November 2021-March 2022) with the research team and CRAG and were sense-checked with CRAG members and the wider study team, to synthesise a final iteration of the themes and sub-themes (see supplementary material). We present findings related to five overarching themes: (1) unmet needs of local communities during the pandemic: inaccessible care and distrust; (2) community-led approaches: social support and leadership to support services; (3) community led support to COVID-19 vaccination delivery; (4) operational and logistical barriers to community-based pandemic responses: challenges faced by the voluntary and community sector; (5) learning from the pandemic response in GM: trust building and harnessing community assets. Themes are discussed in more detail below.

Ethical approval

This study was approved by University of Manchester Ethics Committee (Proportionate University Research Ethics Committee) 24/06/21. Ref 2021-11646-19665.

Unmet needs of local communities during the pandemic: inaccessible care and distrust

The COVID-19 pandemic brought an unprecedented shift in the way NHS services could function due to social distancing and lockdown measures. Pressures included unprecedented demand on hospital capacity and infection control measures (within hospitals and across the NHS) which reduced workforce capacity. There were also staff shortages due to high levels of COVID-19 infection amongst NHS staff, and shortages in non-acute capacity due to staff re-deployment [ 22 , 23 ]. In an effort to reduce pressure on the NHS, the policy mantra “Protect the NHS” was coined as a keynote slogan from the early stages of the pandemic [ 24 ].

It is within this context that many community participants spontaneously raised a general inability to access health services during the pandemic, including GP and specialist services.

when I tried to contact my doctor’s surgery I was on the call for over an hour, number 20, number 15. Then by the time I’m under ten I get cut off. And it happened continuously. I just couldn’t get through and I just gave up really…now it’s like a phone consultation before you can even go and see someone, and even for that you’re waiting two, three weeks. (1029, VCFSE participant)

This resulted in frustration amongst some community participants, who questioned the logic of “protecting the NHS”, seemingly at the expense of their health-related needs. This led to sentiments that other health needs were de-prioritised by decision-makers during the pandemic. It was felt that this logic was counter-productive and fell short of the principles of protecting the most vulnerable.

We were like it just didn’t matter, it could have been much more serious than just a cough or a cold, [] but the help was just not there (1028, community participant).

what about people who actually need to see a doctor so the very vulnerable ones that we’re supposed to be protecting. Yes, we’re protecting the NHS, I understand that, I said, but we’ve also got to protect all those vulnerable people that are out there that are actually isolated (1011, community participant).

Community participants described their fear of accessing healthcare services because of potential risks of catching the virus in these settings, and fear of insufficient care due to well-publicised pressures in NHS settings. Some VCFSE participants noted that the widely publicised pressures faced by the NHS, and heightened media and political attention around COVID-19 cases in health settings, led to fear and anxiety Footnote 4 .

I didn’t go to the hospital because I was scared shitless whether I was going to come out alive from hospital. (1023, community participant)

…the number of people who didn’t access services when they should have done… They were either terrified they were going to go into hospital and catch COVID straightaway and die, or they were terrified that they were taking [the hospital space] away from someone else (2003, VCFSE participant).

Overall, this led to a strong sense that mainstream services were not supporting the needs of local communities. This was especially felt for those requiring specialist services (e.g. mental health or secondary services), and for those who had faced intersecting inequalities, such as health issues, language and digital/IT barriers, and newly settled refugees and immigrants.

Community-led approaches: social support and leadership to support services

As a consequence of this unmet need, VCFSE and community participants identified that local communities themselves increased activities to provide community support. Participants felt strongly that this increased support provided by the VCFSE sector and community networks remains under-recognised and under-valued by the health system and wider public.

BAME organisations were going around door to door, giving hand sanitisers, giving masks to everybody [ ]. And it was the BAME community that was the most active during COVID delivering medication, delivering food to houses, doing the shopping. [ ] Nobody gave credit to that. Nobody talks about the good work that the BAME community has done. (1020, community participant)

A number of community and VCFSE sector participants highlighted the work done at the community level, by either themselves or other networks to support local communities. This included providing support packages, running errands for vulnerable community members, cooking and food shopping services, a helpline and communication networks for local communities, and online wellbeing and support groups.

We might not have been in hospital, but we were frontline workers in the community. (1028, community participant)

Support was provided by formal VCFSE organisations and by smaller, sometimes informal, community networks and channels, in which support mechanisms included mental health support and wellbeing focused communications to combat loneliness and boost wellbeing. This was often focused around outreach and the provision of community-based support to the most marginalised and vulnerable groups that had been disproportionately impacted during the pandemic, e.g. recently settled refugees and asylum seekers, older individuals.

We have an Iranian group in Salford…And one of them spotted this young woman in the queue and she thought she looked Iranian, you know….anyway she started a conversation, and this person had been an asylum seeker at the beginning of the pandemic and had been in a detention centre during the pandemic. And then, finally got their leave to remain and then were just basically dumped in Salford. [ ] just having that friendly face and someone was trying to start that conversation, she was able to be linked into this group of women who support other refugees and asylum seekers from the Middle East. (2014, VCFSE participant)

Community led support to COVID-19 vaccination delivery

The VCFSE sector and community networks also played a crucial part in supporting COVID-19 vaccine delivery. Community, VCFSE and system-sector participants recognised the unique role that the VCFSE sector had played in reaching diverse communities and sections of communities not reached by the mainstream vaccination programme. For example, VCFSE groups aided vaccine delivery by helping run vaccine ‘pop-up’ sites in community spaces including mosques and other religious sites, children’s centres, and local specialist charities (e.g. refugee and sex worker charities).

The use of community ‘champions’ and community ‘connectors’ to convey messaging around the vaccination drive were deemed especially vital in this regard. Trusted members of communities (e.g. community leaders) who had crucial pre-existing communication channels were able to effectively interact with different parts of communities to advocate for the vaccine and address misinformation. Situated within communities themselves, these ‘champions’ held established trust within communities, allowing conversations surrounding the vaccine to be held on the basis of shared experiences, honesty, openness, compassion and understanding.

So, as with any ethnic minority community, unless you’re part of it, it’s almost impossible to completely dig out all its norms and its very, very fine distinctions…[ ] what is acceptable, what is not acceptable[ ]? Unless you’re part of it, or you’ve really immersed yourself in the culture for decades, it’s almost impossible to get it (2015, VCFSE participant)

One of the strongest approaches that you can take to increase uptake in any community, whether it be pregnant women or a faith group or a geographical area or a cultural group, is that if you’ve got a representative from that community leading on and advocating for the vaccine, you’re going to have the best impact (2011, health and care system stakeholder participant).

unless Imams or significant people in the community were coming out for them and saying, it’s absolutely fine, it’s safe, and culturally it’s the right thing to do, there was a bit of uncertainty there (2010, health and care system stakeholder participant).

Health and care system stakeholders also emphasised the importance of “community ownership” of vaccination approaches, and of system responsiveness to identified needs and priorities at the community level. Health and care system stakeholders recognised that they were able to utilise community links to gain better on-the-ground knowledge, provided in real time, to supplement locally held data and inform targeted efforts to boost uptake. This included council-led initiatives such as door-knocking, with council staff, local health improvement practitioners, and VCFSE representatives working together to provide information about vaccine clinics and register people for vaccine appointments.

if messages went out and they didn’t land right they [the VCFSE sector] could be the first people [that] would hear about that and they could feed that back to us. [ ]….we were able to regularly go to them and say, look from a geographical perspective we can see these key areas…[ ] the people aren’t coming for vaccinations, [ ] what more can you tell us. Or, we can say, from these ethnicities in this area we’re not getting the numbers, what more can you tell us. And when we’ve fed them that intelligence then they could then use that to go and gain further insight for us, so they were a kind of, key mechanism (2010, health and care system participant).

Operational and logistical barriers to community-based pandemic responses: challenges faced by the voluntary and community sector

VCFSE sector and health and care system stakeholder participants reported significant logistical barriers to partnership working to support communities during the pandemic. Barriers included red tape and bureaucracy, which delayed responses to communities’ health and wellbeing needs.

whilst we were buying masks and hand sanitisers and going door to door, [ ] the council were still getting their paperwork in order, their policies in order, it was meeting after meeting. It took them seven to eight weeks for them to say [ ] we’ve got masks, would you like to help dish them out. (1029, VCFSE participant)

VCFSE and health and care system participants also raised challenges with respect to the VCFSE sector supporting the vaccination programme. This resulted in frustration amongst both VCFSE and health and care system participants who recognised the value of these community-based approaches.

The time that trickles through to the council and the time that the council turn around and say all right, we’ll actually let you do it was weeks later, and the community is turning round to us and saying to us well, what’s going on? We don’t like being messed around like this… (2008, VCFSE participant).

Participants highlighted the numerous health-related bodies with various roles which comprise a complex system for VCFSE partners to navigate, in part due to organisational and cultural clashes. Frustration was felt by both VCFSE and health and care system stakeholder participants (from local councils) in this respect. One VCFSE participant discussing the vaccine rollout noted:

We hit dead end after dead end within the council and there was literally very little response….You’ve got so many departments within this massive organisation called the council…[ ].it’s very difficult to navigate all that and deal with all that bureaucracy… (2008, VCFSE participant).

Broader institutional and organisational barriers to VCFSE support were identified, where cultural clashes between differing values and ways of working emerged, including differing attitudes to risk aversion and the system-level commitment to privilege value-for-money during the vaccination rollout. More practical issues around information governance and training were also raised as barriers to collaborative working.

I don’t think that they understand the power of community and the way community works. I don’t think that at a governmental level they understand what it means to penetrate into a community and actually understand what needs to be done to help a community…[ ] If they did and they had better links and ties into understanding that and helping that then we likely wouldn’t have had so many hurdles to get through (2008, VCFSE participant).

….in terms of public money, this is a public programme, we need to get value for the public pound. So we’re saying to [VCFSE organisation], how much is it going to cost? And [VCFSE organisation] are like, well, we don’t really know, until we deliver it. And we’re like, well, we can’t really approve it, until we know what it’s going to cost…. (2006, health and care system stakeholder participant)

Overall, these issues amounted to difficulties in power-sharing between public sector organisations and VCFSEs during a period of rapid response to a public health crisis, amid political, institutional, and other external pressures. Frustration over this was felt on both sides, echoed amongst VCFSE and health and care system stakeholder participants alike.

the public sector [ ] need to get better at letting go of some of the control. So even still, after I said, so many times, [VCFSE organisation] are delivering this, [VCFSE organisation] are doing everything, [ ] I still got the comms team going, are we doing a leaflet? No, [VCFSE organisation] are doing it, this is a [VCFSE organisation] programme, this isn’t a Council programme. (2006, local authority participant)

it is difficult sometimes working with organisations, I find myself very much stuck in the middle sometimes [ ] I engage with [community groups] and ask them how best we do it and then we put things in place that they’ve asked for, and then they’ve told us it’s not working why have you done it like that. [ ] I think it’s acknowledgement to do it right, it takes time, and it takes effort, it takes resource. (2010, local authority participant)

Health and care system stakeholders also highlighted the importance of accessible, localised vaccination hubs to reach different parts of diverse local communities, e.g. sites in local mosques and near local supermarkets to reach different demographics. Examples included mobile vaccination sites to reduce accessibility barriers, alongside dialogue-based initiatives to answer questions and respond to concerns from local communities about the vaccine, with a view to building trust without explicit pressure to receive the vaccine. Describing their efforts to engage with a member of the local community over the vaccine, two local health and care system stakeholders detailed the following example of how localised, communication-based approaches were deemed successful:

She came to the clinic and there were a lot of tears. It was very emotional. She’d been through a very difficult journey and had got pregnant by IVF, so it was a big decision for her, a big risk that she thought she was taking. Whether she took the vaccine or not, it felt like a risk to her, [ ] we were able to sit down and talk to her. We had some peers there. So we had other pregnant women there who’d had the vaccine, that were able to give her some confidence. We had the specialist multicultural midwife there, [ ] And we literally just sat and drank coffee with her and let her talk and she ended up agreeing to have the vaccine [ ] (2011, system-level stakeholder).

…And the feedback from that lady was amazing. A couple of weeks ago I contacted her to make sure she was going to come down for her booster and she was just so grateful. [ ] she’d had backlash from her family and people within her community for taking up the vaccine and they still thought it was a massive risk. But she had no doubts that she’d done absolutely the right thing… (2012, system-level stakeholder).

Learning from the pandemic response in GM: trust building and harnessing community assets

Drawing together these findings from health and care system stakeholders, community and VCFSE participants, several learning points were identified.

In terms of vaccine delivery, some health and care system stakeholder participants reflected on the need for more joined-up ways of working, across existing services and amongst VCFSE partners, to ensure efficiency and maximise uptake by embedding the vaccination programmes into other health services. For example, offering vaccination through health visiting or health checks, or offering COVID-19 vaccine boosters and flu vaccinations in single visits at care homes. These settings could also provide opportunities for dialogue with local communities where there is pushback against vaccination. Another health and care system stakeholder identified the need for greater joined-up delivery of services: utilising the VCFSE sector to deliver multiple services simultaneously, including vaccination, to improve vaccine uptake and access to other healthcare services:

the sex worker clinic is a good example of that. [ ] People were coming in for another reason, to get their health check and to get their support from the advisors there at that voluntary organisation, [ ]…if there’s a multiple purpose at the site, for people to attend, you can start to engage them in the conversation and then take the opportunity and vaccinate them. So I’m really interested in looking at that a little bit more, about how that can help to increase uptake. (2011, health and care system stakeholder participant)

A VCFSE participant suggested using educational settings such as schools as a channel to disseminate public health and vaccine-related information, as trusted settings which have wide-reach to many different communities.

A number of health and care system stakeholders, VCFSE and community participants noted that long-term, continuous, meaningful engagement is crucial to build longer-term trust between institutions and communities, and to improve the efficacy of public health measures. It was felt that more concentrated efforts were required from the NHS and other statutory organisations to reach the most marginalised and minoritised communities, for example through door-knocking and welfare calls. Participants highlighted that this was required not solely at times of public health crises, but as part of continued engagement efforts, in order to adequately engage with the most marginalised groups and effectively build long-term trust. This may be done most effectively by building on existing links to marginalised communities, for example using education liaison staff to understand traveller communities’ perspectives on the vaccine.

proactive engagement with communities both locally and nationally to say, [the health system] are looking at this, what’s people’s thoughts, views, you know, is there any issues with this, what more can we do, what do you need to know to make an informed decision. This is what we were thinking of, how would this land…I think we could learn by, [ ] doing that insight work, spending more time working with communities at a kind of, national, regional, and local level (2010, health and care system stakeholder participant).

[the health system] could have engaged better with communities, I think bringing them in at the beginning. So, having them sat around the table, representatives from different groups, understanding how to engage with them from the very beginning…I think they could have used the data very very early on to inform who were engaging. We didn’t quite get it right at the beginning, we didn’t link the public health data teams with the comms and engagement teams (2013, health and care system stakeholder participant).

The tone of communications was also seen to be important. One health and care system stakeholder participant noted that the strategy of pushing communications and public health messaging aimed at behavioural change did not achieve the desired effect as these did not engage effectively with the communities to alleviate or address key concerns about the vaccine. These were deemed less successful than starting from a place of understanding and openness to generate constructive dialogue which could foster trust and respect.

There was also more specific learning identified in terms of collaboration between public sector institutions, VCFSEs and community links, with this seen as vital to build strong, long-term relationships between sectors based on trust and mutual respect. This should also involve working to share knowledge between sectors in real-time.

Health and care system stakeholder and VCFSE participants both suggested a failure to further develop partnerships fostered during the pandemic would be a lost opportunity that could potentially create distrust and additional barriers between communities, VCFSEs and public organisations, perhaps further marginalising seldom-heard groups.

we need to find ways which we have ongoing engagement, and I think it needs to be more informal. People don't want to be just constantly asked and asked and asked (2010, health and care system stakeholder participant).

a network of just sharing information and insight, rather than just engaging when you've got something specific to engage about (2010, health and care system stakeholder participant).

We were then thinking to ourselves, well, maybe we shouldn't be doing this. If it's going to cause us damage, if the council can't work with us properly maybe we just shouldn't do it. We've got to weigh up. We don't want to lose our trust within the community (2008, VCFSE participant).

In terms of dynamics and working arrangements between sectors, participants thought it important to allow community organisations and VCFSEs to lead on their areas of speciality, for example community organisations leading on outreach and communications within and to communities. This relates to the identified need to pursue adaptable and flexible approaches to vaccine delivery. Moreover, more joined-up decision-making between the health system and VCFSEs is needed to ensure better use of local intelligence and improved planning.

Discussion & policy implications

Unmet need and the role of communities during the pandemic

Our findings clearly demonstrate that local communities were not supported sufficiently by mainstream services during the COVID-19 pandemic. This in turn led to frustration, fear and loss of faith in the healthcare system as a whole, evidenced also in responses to the COVID-19 vaccination programme in which distrust results from wider experiences of historical marginalisation and structural inequalities [ 14 ]. In the absence of mainstream service support, our findings demonstrate how VCFSE organisations and community networks mobilised to support local communities to fulfil unmet health, social care, and wellbeing needs. This supports emerging evidence from across England which finds that the VCFSE sector played a key role in supporting communities during the pandemic [ 6 , 8 , 25 ].

Community-based, localised approaches, community-led and community-owned initiatives, 'community champions' and 'community connectors' were also highlighted as crucial to the success of the COVID-19 vaccination drive. Participants noted that community-led approaches were uniquely positioned to reach some communities when mainstream approaches were unsuccessful. This is echoed in existing literature, where the role of localised community responses was deemed important for reaching marginalised groups as part of the wider pandemic response [ 26 ].

Operational and logistical barriers

Operational and logistical barriers created dissonance between communities and the system. These barriers included difficulties with decision-making and power-sharing between VCFSE and commissioning or clinical organisations, organisational cultural clashes, red tape and bureaucracy, and complex systems and power structures to navigate. This builds on existing evidence of barriers to partnership working during the pandemic, including cultural clashes and bureaucracy/red tape [ 5 , 27 ]. The VCFSE sector also suffered from service closures and reduced funding and resources, alongside increased demand for services and the need to adapt service provision [ 8 ].

These factors hindered collaborative working and created risk for VCFSEs, including strained relationships with local communities resulting from delays in implementing services. In most VCFSE-health system partnerships, participants noted, power is generally held by the health system partner, while reputational risk and additional resource-based costs lie with VCFSE partners. Supporting capacity building and workforce resources within the voluntary sector would help to rebalance this [ 28 ].

Inadequate processes for establishing collaborative working enhance distrust between the health system and the VCFSE sector, which in turn makes collaborative working more difficult. Trust is an important factor in how the system interacts with VCFSEs, with a lack of trust leading to further bottlenecks in VCFSE activities [ 29 ]. Alongside this is the need for greater health system appreciation of the VCFSE sector, with VCFSE partners reporting that they faced greater scrutiny and more arduous administrative processes than private sector partners [ 2 , 29 ].

Learning from the pandemic: service prioritisation

All sectors of the health and care system face pressures from resource shortages and internal and external targets [ 30 , 31 ]. This is often linked to drives to increase the value-for-money of services, but key questions remain as to how to assimilate the goal of health equity within value-for-money objectives [ 32 ]. To this end, prioritising value-for-money may be at odds with reducing health inequities. For example, during the rollout of the vaccination programme, additional resources and innovative approaches were required to reach marginalised communities [ 33 , 34 ]. This is supported by emerging evidence from England and internationally that efforts to drive vaccination uptake and reduce inequities in uptake amongst marginalised populations require significant resources and a breadth of approaches [ 34 ]. Our findings suggest that changes in vaccine uptake were smaller and slower to be realised in these populations, resulting in a "slow burn" in terms of demonstrating quantifiable outcomes. Given the NHS principles of equity [ 10 , 35 ], reaching these groups should remain a public health priority, and failure to prioritise them may incur greater long-term financial costs resulting from greater health service needs. Our findings suggest that challenging entrenched attitudes and frameworks for how success is measured, and adapting structures to better incentivise targeted interventions for marginalised or high-risk groups, is essential to addressing unmet needs amongst marginalised communities.

The changing commissioning landscape

The development of ICSs and place-based partnerships has changed how health and care services are commissioned. National guidance encourages health and care leaders to include VCFSE organisations in partnership arrangements and embed them into service delivery [ 12 ], with 'alliance models' between ICSs and the VCFSE sector [ 36 ] established in certain regions (see, for example, [ 37 ]). However, this rests on "a partnership of the willing" [ 37 ] between ICS partners and VCFSE sector players, and concrete guidance for achieving collaborative working in practice is lacking. As the findings in this paper suggest, evolving decision-making processes may add to resource burdens for VCFSE organisations. Traditional health and care partners such as the NHS and local authorities should consider how their ways of working may need to change to foster full VCFSE inclusion on an equal standing; otherwise, only the VCFSE stakeholders with sufficient capacity and resource may be able to be meaningfully involved.

Creating a VCFSE-accessible health and care system

In terms of fostering relationships between different sectors, participants acknowledged that pre-pandemic efforts to engage communities, community networks and VCFSEs were insufficient, with more meaningful, well-resourced engagement required going forward. Participants also identified the importance of avoiding tokenistic involvement of the VCFSE sector, which may be counter-productive for developing meaningful long-term partnerships. More equal relationships between statutory and VCFSE sectors are needed to foster improved collaborative working [ 5 , 38 ], and this has already been identified at the GM level [ 28 ]. Central to this are actioned principles of co-design, including power-sharing, community ownership and trust. For co-design strategies to be successful, the role of the VCFSE sector and their ownership of approaches must be recognised and championed within co-design strategies and the enactment of co-designed activities.

Relatedly, health and social care decision-makers need to place greater trust in the VCFSE sector to deliver services effectively and efficiently, ensuring that funding compliance measures and processes are proportionate and not overly burdensome, to avoid funding bottlenecks which in turn impact service delivery [ 2 ]. Currently, at the national level, VCFSE applicants typically only become aware of funding through existing networks, leaving less-connected organisations to find out 'by chance', thereby limiting reach amongst other organisations [ 2 ]. This may be especially true for smaller or ad-hoc VCFSE networks and groups. Our findings support removing bottlenecks to applying for funding and championing more streamlined processes for accessing it [ 2 ].

Our findings also suggest that health systems should engage with the full breadth of the VCFSE sector, creating space for the involvement of smaller scale and less formal organisations as partners. Sharing of best practice and advice for adapting to local contexts should be promoted, alongside evaluation of collaborative models.

Finally, the pandemic period saw unprecedented state-sponsored investment into the VCFSE sector [ 29 ]. Within the GM context, this funding enabled VCFSEs to develop organisational capacity and systems, develop new partnerships, and better respond to the (unmet) needs of local communities [ 39 ]. Currently there are no clear plans to maintain this investment, but sustained inter-sector partnership working will require continued investment in the VCFSE sector.

Strengths & limitations

There are two main limitations to this study. Firstly, whilst the study achieved diversity in its sample, we could not achieve representation across all marginalised communities and therefore could not cover their experiences in depth. As such, whilst the analyses provide valuable insights, such insights may not be transferable and do not reflect all communities in GM. Secondly, whilst other studies focused on multiple city-regions or areas, our study is limited to the city region of GM. However, this focus provides an in-depth analysis of one region, and, as we discuss in the framing of the paper, we contend that the analysis presented here serves as an exemplar to explore further at the national and international level. It should also be noted that co-design approaches are inevitably time- and resource-heavy, and this was challenging in the context of this study, as local stakeholders wanted timely insights to inform the vaccination programme. However, a key strength of our participatory approach was that it enabled a direct connection with the experiences of communities as relevant to the research, shaping the research questions as well as the design and conduct of the study.

Overall, the contribution of the VCFSE sector during the pandemic is clear, with significant support provided for community health and wellbeing and vaccination delivery. Nevertheless, there remains much to learn from the pandemic period, with the potential to harness capacity to tackle inequalities and build trust through shared learning and greater collaborative working. Maintaining an environment in which VCFSE partners are under-recognised, under-valued, and seemingly face further bureaucratic barriers will only exacerbate barriers to collaborative working. There are also significant questions around systemic issues and sustainability, which must be addressed to overcome existing barriers to collaborative working between sectors. For instance, our findings identify the importance of flexibility and adaptability in ongoing and future service delivery. Where this is not pursued, it may not only impact service delivery but also create roadblocks to collaboration between sectors, creating divisions between entities that are ultimately trying to effect change on similar goals (i.e. improved population health). ICS-VCFSE Alliances and community connectors may be a mechanism to promote this, but clear, actionable guidance will be required to translate rhetoric into real-world progress.

Data availability

Data for this research will not be made publicly available as individual privacy could be compromised. Please contact Stephanie Gillibrand ([email protected]) for further information.

10GM is an umbrella group which seeks to represent the VCSE sector in GM. More information is available at: https://10gm.org.uk/.

These themes are explored in a related paper by Gillibrand et al. [ 14 ].

Topic guides are provided as supplementary material.

Distrust was also raised in relation to fear and anxiety in NHS settings, and this is discussed in detail in a related paper from this study by Gillibrand et al. [ 14 ].

Abbreviations

CCG: Clinical Commissioning Groups

CRAG: Community Research Advisory Group

GM: Greater Manchester

ICS: Integrated Care Systems

VCFSE: Voluntary, Community and Social Enterprise

Craston M, Mackay S, Cameron D, Writer-Davies R, Spielman D. Impact Evaluation of the Coronavirus Community Support Fund. 2021.

NatCen Social Research. Evaluation of VCSE COVID-19 Emergency Funding Package. Department for Digital, Culture, Media & Sport (DCMS); 27 April 2022.

Marston CRA, Miles S. Community participation is crucial in a pandemic. Lancet. 2020;395(10238):1676–8.


Frost S, Rippon S, Gamsu M, Southby K, Bharadwa M, Chapman J. Space to Connect Keeping in Touch sessions: A summary write up (unpublished). Leeds: Leeds Beckett University; 2021.

Pilkington G, Southby K, Gamsu M, Bagnall AM, Bharadwa M, Chapman J, Freeman C. Through different eyes: How different stakeholders have understood the contribution of the voluntary sector to connecting and supporting people in the pandemic; 2021.

Dayson C, et al. Capacity through crisis: The role and contribution of the VCSE Sector in Sheffield during the COVID-19 pandemic; 2021.

Timmins N. The COVID-19 vaccination programme: trials, tribulations and successes. The King's Fund; 2022.

Howarth M, Martin P, Hepburn P, Sheriff G, Witkam R. A Realist evaluation of the state of the Greater Manchester Voluntary, Community and Social Enterprise Sector 2021. GMCVO/University of Salford; 2021.

NHS England. Five Year Forward View. Leeds: NHS England; October 2014.

NHS England. The NHS Long Term Plan. NHS England; January 2019.

Surgey M. With great power: Taking responsibility for integrated care. 2022.

NHS England. Integrating care: next steps to building strong and effective integrated care systems across England. Leeds: NHS England; 2020.


Watkinson RE, et al. Ethnic inequalities in COVID-19 vaccine uptake and comparison to seasonal influenza vaccine uptake in Greater Manchester, UK: a cohort study. PLoS Med. 2022;19(3).

Gillibrand S, Kapadia D, Watkinson R, Issa B, Kwaku-Odoi C, Sanders C. Marginalisation and distrust in the context of the COVID-19 vaccination programme: experiences of communities in a northern UK city region. BMC Public Health. 2024;24(1):853.


Cabinet Office. COVID-19 response: autumn and Winter Plan 2021. Guidance: GOV.UK; 2021.

Department of Health and Social Care. Every adult in UK offered COVID-19 vaccine [press release]. GOV.UK, 19 July 2021 2021.

Irani E. The Use of Videoconferencing for qualitative interviewing: opportunities, challenges, and considerations. Clin Nurs Res. 2019;28(1):3–8.


Seitz S. Pixilated partnerships, overcoming obstacles in qualitative interviews via Skype: a research note. Qualitative Res. 2016;16(2):229–35.


Gale NK. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol. 2013;13(1):1–8.

Castleberry A, Nolen A. Thematic analysis of qualitative research data: is it as easy as it sounds? Curr Pharm Teach Learn. 2018;10(6):807–15.

Braun V, Clarke V. Thematic analysis. APA handbook of research methods in psychology, Vol 2: Research designs: Quantitative, qualitative, neuropsychological, and biological2012. pp. 57–71.

Burn S, Propper C, Stoye G, Warner M, Aylin P, Bottle A. What happened to English NHS hospital activity during the COVID-19 pandemic? IFS Briefing Note; 13 May 2021.

NHS. COVID-19: Deploying our people safely. 2020 [updated 30 April 2020]. https://www.england.nhs.uk/coronavirus/documents/COVID-19-deploying-our-people-safely/ .

Department of Health and Social Care. New TV advert urges public to stay at home to protect the NHS and save lives [press release]. Department of Health and Social Care; 21 January 2021.

McCabe A, Wilson M, Macmillan R. Stronger than anyone thought: communities responding to COVID-19. Local Trust, Sheffield Hallam University, TSRC; 2020.

McCabe A, Afridi A, Langdale E. Community responses to COVID-19: connecting communities? How relationships have mattered in community responses to COVID-19. Local Trust, TSRC, Sheffield Hallam University; January 2022.

Carpenter J. Exploring lessons from Covid-19 for the role of the voluntary sector in Integrated Care Systems. Oxford Brookes University; July 2021.

Greater Manchester Combined Authority. GM VCSE Accord Agreement. 2021. https://www.greatermanchester-ca.gov.uk/media/5207/gm-vcse-accord-2021-2026-final-signed-october-2021-for-publication.pdf .

Department for Digital, Culture, Media & Sport. Financial support for voluntary, community and social enterprise (VCSE) organisations to respond to coronavirus (COVID-19). Department for Digital, Culture, Media & Sport and Office for Civil Society; 2020 [updated 20 May 2020]. https://www.gov.uk/guidance/financial-support-for-voluntary-community-and-social-enterprise-vcse-organisations-to-respond-to-coronavirus-COVID-19 .

Smee C. Improving value for money in the United Kingdom National Health Service: performance measurement and improvement in a centralised system. Measuring Up: Improving Health Systems Performance in OECD Countries; 2002.

McCann L, Granter E, Hassard J, Hyde P. You can’t do both—something will give: limitations of the targets culture in managing UK health care workforces. Hum Resour Manag. 2015;54(5):773–91.

Smith P. Measuring value for money in healthcare: concepts and tools. London: Centre for Health Economics, University of York; The Health Foundation; September 2009.

Ekezie W, Awwad S, Krauchenberg A, Karara N, Dembiński Ł, Grossman Z, et al. Access to vaccination among disadvantaged, isolated and difficult-to-reach communities in the WHO European Region: a systematic review. Vaccines. 2022;10(7):1038.

British Academy. Vaccine equity in multicultural urban settings: a comparative analysis of local government and community action, contextualised political economies and moral frameworks in Marseille and London. London: The British Academy; 2022.

NHS England. Core20PLUS5 (adults) – an approach to reducing healthcare inequalities. 2023. https://www.england.nhs.uk/about/equality/equality-hub/national-healthcare-inequalities-improvement-programme/core20plus5/ .

NHS England. Building strong integrated care systems everywhere. 2021. Available from: https://www.england.nhs.uk/wp-content/uploads/2021/06/B0664-ics-clinical-and-care-professional-leadership.pdf .

Anfilogoff T, Marovitch J. Who Creates Health in Herts and West Essex? Presentation to NHS Confederation Seminar: Who Creates Health? 8 November 2022. 2022.

Bergen JWS. Pandemic pressures: how Greater Manchester equalities organisations have responded to the needs of older people during the covid-19 crisis. GMCVO; 2021.

Graham M. Learning from Covid-19 pandemic grant programmes lessons for funders and support agencies. May 2022. GMCVO; 2022.


Acknowledgements

The research team would like to thank ARC-GM PCIE team (Sue Wood, Aneela McAvoy, & Joanna Ferguson) and the Caribbean and African Health Network for their support in this study. We would also like to thank the Advisory Group members: Nasrine Akhtar, Basma Issa and Charles Kwaku-Odoi for their dedicated time, commitment, and valuable inputs into this research project and to partners who contributed to the early inception of this work, including members of the ARC-GM PCIE Panel & Forum & Nick Filer. We would also like to extend our thanks to the study participants for their participation in this research.

The project was funded by an internal University of Manchester grant and supported by the National Institute for Health and Care (NIHR) Applied Research Collaboration for Greater Manchester. Melissa Surgey’s doctoral fellowship is funded by the Applied Research Collaboration for Greater Manchester. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, or the Department of Health and Social Care.

Author information

Authors and Affiliations

Centre for Primary Care and Health Services Research, University of Manchester, Greater Manchester, England, UK

Stephanie Gillibrand

NIHR Applied Research Collaboration for Greater Manchester, Greater Manchester, England, UK

Ruth Watkinson, Melissa Surgey & Caroline Sanders

Independent (public contributor), Greater Manchester, England, UK

Greater Manchester Patient Safety Research Centre, University of Manchester, Greater Manchester, England, UK

Caroline Sanders


Contributions

SG: lead writer/editor, design of the work. RW: design of the work, drafting of article, review and revise suggestions. MS: draft of the article, review and revise suggestions. BI: design of the work, review and revise suggestions. CS: design of the work, draft of the article, review and revise suggestions. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Stephanie Gillibrand .

Ethics declarations

Ethics approval and consent to participate

This study was approved by the University of Manchester Ethics Committee (Proportionate UREC) on 24/06/21 (Ref 2021-11646-19665). Informed consent to participate in the research was obtained from each participant by a member of the research team ahead of their participation in the study. All experiments were performed in accordance with relevant guidelines and regulations.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Supplementary Material 3

Supplementary Material 4

Supplementary Material 5

Supplementary Material 6

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Gillibrand, S., Watkinson, R., Surgey, M. et al. “ We might not have been in hospital, but we were frontline workers in the community ”: a qualitative study exploring unmet need and local community-based responses for marginalised groups in Greater Manchester during the COVID-19 pandemic. BMC Health Serv Res 24 , 621 (2024). https://doi.org/10.1186/s12913-024-10921-4


Received : 10 November 2023

Accepted : 28 March 2024

Published : 13 May 2024

DOI : https://doi.org/10.1186/s12913-024-10921-4


Keywords: Marginalised groups

BMC Health Services Research

ISSN: 1472-6963

number of participants in qualitative research

IMAGES

  1. Table of participants in qualitative research

    number of participants in qualitative research

  2. Understanding Qualitative Research: An In-Depth Study Guide

    number of participants in qualitative research

  3. How Many Participants for a UX Interview?

    number of participants in qualitative research

  4. Table of participants in qualitative research

    number of participants in qualitative research

  5. Number of participants per qualitative data collection technique

    number of participants in qualitative research

  6. -Participants in the qualitative research

    number of participants in qualitative research

VIDEO

  1. 🔴 How to select Participants: Quantitative Research

  2. Using SPSS Visualizing a Confidence Interval by Groups

  3. Analyzing Qualitative Data: Indepth Interviews and Focus Groups

  4. Quantitative or Qualitative: Number of complaint letters received by United States Postal Service

  5. Sampling and Recruiting Participants for Interview Research Projects by Kathryn Anderson-Levitt

  6. Flower Petal Math Fun #experientiallearning

COMMENTS

  1. Qualitative Research Part II: Participants, Analysis, and Quality Assurance

    Qualitative Research Part II: Participants, Analysis, and Quality Assurance. This is the second of a two-part series on qualitative research. Part 1 in the December 2011 issue of Journal of Graduate Medical Education provided an introduction to the topic and compared characteristics of quantitative and qualitative research, identified common ...

  2. Sample sizes for saturation in qualitative research: A systematic

    Qualitative samples that are larger than needed raise ethical issues, such as wasting research funds, overburdening study participants, and leading to wasted data (Carlsen and Glenton, 2011; Francis et al., 2010), while samples that are too small to reach saturation reduce the validity of study findings (Hennink et al., 2017). Our results thus ...

  3. Big enough? Sampling in qualitative inquiry

    Any senior researcher, or seasoned mentor, has a practiced response to the 'how many' question. Mine tends to start with a reminder about the different philosophical assumptions undergirding qualitative and quantitative research projects (Staller, 2013).As Abrams (2010) points out, this difference leads to "major differences in sampling goals and strategies."(p.537).

  4. PDF Determining the Sample in Qualitative Research

    straightforward guidelines for determining the number of participants in qualitative studies (Patton, 2015), rather several factors affect in deciding the samples. For instance, in her ... Determining the participants in qualitative research is problematic since various scholars have conceived it in their way. Deciding the participants remain ...

  5. Series: Practical guidance to qualitative research. Part 3: Sampling

    A sampling plan is a formal plan specifying a sampling method, a sample size, and procedure for recruiting participants (Box 1) . A qualitative sampling plan describes how many observations, interviews, focus-group discussions or cases are needed to ensure that the findings will contribute rich data. ... Qualitative research moves from ...

  6. Sample size: how many participants do I need in my research?

    It is the ability of the test to detect a difference in the sample, when it exists in the target population. Calculated as 1-Beta. The greater the power, the larger the required sample size will be. A value between 80%-90% is usually used. Relationship between non-exposed/exposed groups in the sample.

  7. Sample Size Policy for Qualitative Studies Using In-Depth Interviews

    The policy of the Archives of Sexual Behavior will be that it adheres to the recommendation that 25-30 participants is the minimum sample size required to reach saturation and redundancy in grounded theory studies that use in-depth interviews. This number is considered adequate for publications in journals because it (1) may allow for ...

  8. (PDF) How many participants are necessary for a qualitative study

    Abstract. One of the difficulties associated with qualitative research refers to sample size. Researchers often fail to present a justification for their N and are criticized for that. This ...

  9. Reporting and Justifying the Number of Interview Participants in

    For such qualitative research there is a paucity of discussion across the social sciences, the topic receiving far less attention than its centrality warrants. We analysed 798 articles published in 2003 and 2013 in ten top and second tier academic journals, identifying 248 studies using at least one type of qualitative interview.

  10. How Many Focus Groups Are Enough? Building an Evidence Base for

    Few empirical studies exist to guide researchers in determining the number of focus groups necessary for a research study. ... Doing funded qualitative research. In Handbook for Qualitative Research, eds. Denzin N. K., Lincoln Y. S., 401-20. Thousand Oaks, CA: Sage. ... Sampling and selecting participants in field research. In Handbook of ...

  11. Planning Qualitative Research: Design and Decision Making for New

    Qualitative research, conducted thoughtfully, is internally consistent, rigorous, and helps us answer important questions about people ... A number of research topics and questions indicate when using phenomenology as an appropriate approach. ... As with any human subjects research, issues of respect for participants are always paramount ...

  12. Qualitative Study

    Qualitative research gathers participants' experiences, perceptions, and behavior. It answers the hows and whys instead of how many or how much. It could be structured as a standalone study, purely relying on qualitative data, or part of mixed-methods research that combines qualitative and quantitative data. This review introduces the readers ...

  13. Real qualitative researchers do not count: The use of numbers in

    ... to qualitative research, as meaning depends, in part, on number. As in quantitative research, numbers are used in qualitative research to establish ... participants, interviewed only once, can yield 250 pages of raw data alone. Qualitative researchers can take advantage of the rhetorical ...

  14. Qualitative Research: Getting Started

    Qualitative research was historically employed in fields such as sociology, history, ... The number of participants is therefore dependent on the richness of the data, though Miles and Huberman suggested that more than 15 cases can make analysis complicated and "unwieldy".

  15. Qualitative Research Part II: Participants, Analysis, and Quality

    This is the second of a two-part series on qualitative research. Part 1 in the December 2011 issue of Journal of Graduate Medical Education provided an introduction to the topic and compared characteristics of quantitative and qualitative research, identified common data collection approaches, and briefly described data analysis and quality assessment techniques. Part II describes in more ...

  16. How many participants do I need for qualitative research?

    The answer lies somewhere in between. It's often a good idea (for qualitative research methods like interviews and usability tests) to start with 5 participants and then scale up by a further 5 based on how complicated the subject matter is. You may also find it helpful to add additional participants if you're new to user research or you ...

  17. How many participants do I need in my qualitative research?

    It is all about reaching the point of saturation, the point where you are already getting repetitive responses (you may want to check Lincoln and Guba, 1985). Over time some researchers say that ...

  18. How to use and assess qualitative research methods

    In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. "members", see above) but as consultants to and active participants in the broader research process [31-33].

  19. Characterising and justifying sample size sufficiency in interview

    Sample adequacy in qualitative inquiry pertains to the appropriateness of the sample composition and size. It is an important consideration in evaluations of the quality and trustworthiness of much qualitative research and is implicated, particularly for research that is situated within a post-positivist tradition and retains a degree of commitment to realist ontological premises, in ...

  20. Interviews and focus groups in qualitative research: an update for the

    Qualitative research is used increasingly in dentistry, due to its potential to provide meaningful, in-depth insights into participants' experiences, perspectives, beliefs and behaviours. These ...

  21. Successful Recruitment to Qualitative Research: A Critical Reflection

    Thomas et al. (2007) drew on a number of their research studies, concluding that gatekeepers who are supportive of a research endeavour can positively impact recruitment. ... Chronicling successful strategies for recruiting participants to qualitative research, and specifying participants' motivations to volunteer, make an important ...

  22. Qualitative Research: Definition, Methodology, Limitation, Examples

    Focus groups gather a small number of people to discuss and provide feedback on a particular subject. ... Because qualitative research is open-ended, participants have more control over the content of the data collected. So the marketer is not able to verify the results objectively against the scenarios stated by the respondents ...

  23. Identifying primary care clinicians' preferences for, barriers to, and

    The study was approved by the institutional ethics committee (NHG DSRB Reference Number: 2018/01355). All participants read the study information sheet before providing written consent. This study followed the Consolidated Criteria for Reporting Qualitative Research guidelines [see Additional file 1]. Participants and recruitment

  24. Characterising and justifying sample size sufficiency in interview

    Qualitative research provides a unique opportunity to understand a clinical problem from the patient's perspective. This study had a large diverse sample, recruited through a range of locations and used in-depth interviews which enhance the richness and generalizability of the results. ... they found that this larger number of participants ...

  25. Why do people participate in research interviews? Participant

    This article contributes to the growing research literature on research participants' agency and the positive gains of participating in qualitative research (Boss, 1987; Clark, 2010; Hutchinson et al., 1994; Hynson et al., 2006; Lohmeyer, 2020; Perera, 2020; Wolgemuth et al., 2015). Our interest in the topic arose partly from a general unease over the increasingly narrow understanding of ...

  26. "We might not have been in hospital, but we were frontline workers in

    A number of challenges associated with collaborative working were experienced by the VCFSE sector and the health system in delivering the vaccination programme in partnership. ... we present findings from a co-designed qualitative research project, drawing on insights from 35 participants, including members of diverse ...