How many participants do I need for qualitative research?

  • Participant recruitment
  • Qualitative research

6 min read · David Renwick


For those new to qualitative research, there’s one question that’s usually pretty tough to figure out: how many participants to include in a study. Whether it’s research as part of the discovery phase for a new product, or an in-depth canvass of the users of an existing service, researchers can often find it difficult to agree on the numbers. So is there an easy answer? Let’s find out.

Here, we’ll look into the right number of participants for qualitative research studies. If you want to know about participants for quantitative research, read Nielsen Norman Group’s article.

Getting the numbers right

So you need to run a series of user interviews or usability tests and aren’t sure exactly how many people you should reach out to. It can be a tricky situation – especially for those without much experience. Do you test a small selection of 1 or 2 people to make the recruitment process easier? Or, do you go big and test with a series of 10 people over the course of a month? The answer lies somewhere in between.

It’s often a good idea (for qualitative research methods like interviews and usability tests) to start with 5 participants and then scale up by a further 5 based on how complicated the subject matter is. You may also find it helpful to add additional participants if you’re new to user research or you’re working in a new area.

What you’re actually looking for here is what’s known as saturation.

Understanding saturation

Whether it’s qualitative research as part of a master’s thesis or as research for a new online dating app, saturation is the best metric you can use to identify when you’ve hit the right number of participants.

In a nutshell, saturation is when you’ve reached the point where adding further participants doesn’t give you any further insights. It’s true that you may still pick up on the occasional interesting detail, but all of your big revelations and learnings have come and gone. A good measure is to sit down after each session with a participant and analyze the number of new insights you’ve noted down.
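That post-session check can be made concrete by logging the codes (insights) captured in each session and counting how many are new. The sketch below is an illustrative heuristic, not a standard method: the session data, code names, and the stopping rule of two consecutive sessions with no new codes are all assumptions for the example.

```python
def new_codes_per_session(sessions):
    """For each session's set of codes, count how many were not seen in any earlier session."""
    seen = set()
    counts = []
    for codes in sessions:
        fresh = set(codes) - seen   # codes appearing for the first time
        counts.append(len(fresh))
        seen |= set(codes)
    return counts

def looks_saturated(sessions, window=2):
    """Heuristic: treat the study as saturated once `window` consecutive sessions add no new codes."""
    counts = new_codes_per_session(sessions)
    return len(counts) >= window and all(c == 0 for c in counts[-window:])

# Hypothetical codes noted after four interviews:
sessions = [
    {"nav confusion", "price anxiety"},
    {"price anxiety", "trust in reviews"},
    {"trust in reviews"},
    {"nav confusion"},
]
print(new_codes_per_session(sessions))  # [2, 1, 0, 0]
print(looks_saturated(sessions))        # True
```

The `window` parameter is the judgment call: a complex domain might warrant waiting for three or four quiet sessions before concluding the big revelations have come and gone.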

Interestingly, in a paper titled How Many Interviews Are Enough?, authors Greg Guest, Arwen Bunce and Laura Johnson noted that saturation usually occurs with around 12 participants in homogeneous groups (meaning people in the same role at an organization, for example). However, carrying out ethnographic research on a larger domain with a diverse set of participants will almost certainly require a larger sample.

Ensuring you’ve hit the right number of participants

How do you know when you’ve reached saturation point? You have to keep conducting interviews or usability tests until you’re no longer uncovering new insights or concepts.

While this may seem to run counter to the idea of just gathering as much data from as many people as possible, there’s a strong case for focusing on a smaller group of participants. In The logic of small samples in interview-based qualitative research, authors Mira Crouch and Heather McKenzie note that using fewer than 20 participants during a qualitative research study will result in better data. Why? With a smaller group, it’s easier for you (the researcher) to build close relationships with your participants, which in turn leads to more natural conversations and better data.

There’s also a school of thought that you should interview 5 or so people per persona. For example, if you’re working in a company that has well-defined personas, you might want to use those as a basis for your study, and then interview 5 people based on each persona. This may be worth considering, and is particularly important, when you have a product with very distinct user groups (e.g. students and staff, or teachers and parents).

How your domain affects sample size

The scope of the topic you’re researching will change the amount of information you’ll need to gather before you’ve hit the saturation point. Your topic is also commonly referred to as the domain.

If you’re working in quite a confined domain, for example, a single screen of a mobile app or a very specific scenario, you’ll likely find interviews with 5 participants to be perfectly fine. Moving into more complicated domains, like the entire checkout process for an online shopping app, will push up your sample size.

As Mitchel Seaman notes: “Exploring a big issue like young people’s opinions about healthcare coverage, a broad emotional issue like postmarital sexuality, or a poorly-understood domain for your team like mobile device use in another country can drastically increase the number of interviews you’ll want to conduct.”

In-person or remote

Does the location of your participants change the number you need for qualitative user research? Well, not really – but there are other factors to consider.

  • Budget: If you choose to conduct remote interviews/usability tests, you’ll likely find you’ve got lower costs as you won’t need to travel to your participants or have them travel to you. This also affects…
  • Participant access: Remote qualitative research can be a lifesaver when it comes to participant access. No longer are you confined to the people you have physical access to — instead you can reach out to anyone you’d like.
  • Quality: On the other hand, remote research does have its downsides. For one, you’ll likely find you’re not able to build the same kinds of relationships over the internet or phone as those in person, which in turn means you never quite get the same level of insights.

Is there value in outsourcing recruitment?

Recruitment is understandably an intensive logistical exercise with many moving parts. If you’ve ever had to recruit people for a study before, you’ll understand the need for long lead times (to ensure you have enough participants for the project) and the countless long email chains as you discuss suitable times.

Outsourcing your participant recruitment is just one way to lighten the logistical load during your research. Instead of having to go out and look for participants, you have them essentially delivered to you in the right number and with the right attributes.

We’ve got one such service at Optimal Workshop, which means it’s the perfect accompaniment if you’re also using our platform of UX tools. Read more about that here.

So that’s most of what there is to know about participant recruitment in a qualitative research context. As we said at the start, while it can seem tricky to figure out exactly how many people you need to recruit, in reality it’s not all that difficult.

Overall, the number of participants you need for your qualitative research depends on your project’s scope and domain, among other factors. Keep saturation in mind, as well as the location of your participants, and get the most you can out of what’s available to you. Remember: Some research is better than none!

Capture, analyze and visualize your qualitative data.

Try our qualitative research tool for usability testing, interviewing and note-taking. Reframer by Optimal Workshop.


Published on August 8, 2019


David Renwick

David is Optimal Workshop's Content Strategist and Editor of CRUX. You can usually find him alongside one of the office dogs 🐕 (Bella, Bowie, Frida, Tana or Steezy). Connect with him on LinkedIn.



Qualitative study design: Sampling


As part of your research, you will need to identify "who" you need to recruit or work with to answer your research question/s. Often this population will be quite large (such as nurses or doctors across Victoria), or they may be difficult to access (such as people with mental health conditions). Sampling is a way that you can choose a smaller group of your population to research and then generalize the results of this across the larger population.

There are several ways that you can sample. Time, money, and difficulty or ease in reaching your target population will shape your sampling decisions. While there are no hard and fast rules around how many people you should involve in your research, some researchers estimate between 10 and 50 participants as being sufficient depending on your type of research and research question (Creswell & Creswell, 2018). Other study designs may require you to continue gathering data until you are no longer discovering new information ("theoretical saturation") or your data is sufficient to answer your question ("data saturation").

Why is it important to think about sampling?

It is important to match your sample as far as possible to the broader population that you wish to generalise to. The extent to which your findings can be applied to settings or people outside of who you have researched ("generalisability") can be influenced by your sample and sampling approach. For example, if you have interviewed homeless people in hospital with mental health conditions, you may not be able to generalise the results of this to every person in Australia with a mental health condition, or every person who is homeless, or every person who is in hospital. Your sampling approach will vary depending on what you are researching, but you might use a non-probability or probability (or randomised) approach.

Non-Probability sampling approaches

Non-Probability sampling is not randomised, meaning that some members of your population will have a higher chance of being included in your study than others. If you wanted to interview homeless people with mental health conditions in hospital and chose only homeless people with mental health conditions at your local hospital, this would be an example of convenience sampling; you have recruited participants who are close to hand. Other times, you may ask your participants if they can recommend other people who may be interested in the study: this is an example of snowball sampling. Lastly, you might want to ask Chief Executive Officers at rural hospitals how they support their staff mental health; this is an example of purposive sampling.

Examples of non-probability sampling include:

  • Purposive (judgemental)
  • Convenience
  • Snowball

Probability (Randomised) sampling

Probability sampling methods are also called randomised sampling. They are generally preferred in research, as this approach means that every person in a population has a chance of being selected. Truly randomised sampling is very complex; even a simple random sample requires a random number generator to select participants from a list (or sampling frame) of the accessible population. For example, if you were to do a probability sample of homeless people in hospital with a mental health condition, you would need to develop a table of all people matching these criteria, allocate each person a number, and then use a random number generator to find your sample pool. For this reason, while probability sampling is preferred, it may not be feasible to draw out a probability sample.
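The procedure above can be sketched in a few lines of Python. The sampling frame, its size, and the sample size here are invented for illustration; the point is only that each member of the frame has an equal chance of selection.

```python
import random

# Hypothetical sampling frame: every person matching the criteria, each given an ID.
frame = [f"participant-{i:03d}" for i in range(1, 201)]  # 200 eligible people

rng = random.Random(42)           # seeded so the draw is reproducible
sample = rng.sample(frame, k=20)  # simple random sample, without replacement

print(len(sample))                       # 20
print(len(set(sample)) == len(sample))   # True: no one is selected twice
```

In practice the hard part is not the random draw but assembling a complete, accurate frame, which is exactly why the text notes that probability sampling is often infeasible.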

Things to remember:

  • Sampling involves selecting a small subsection of your population to generalise back to a larger population
  • Your sampling approach (probability or non-probability) will reflect how you will recruit your participants, and how generalisable your results are to the wider population
  • How many participants you include in your study will vary based on your research design, research question, and sampling approach

Further reading:

Babbie, E. (2008). The basics of social research (4th ed). Belmont: Thomson Wadsworth

Creswell, J.W. & Creswell, J.D. (2018). Research design: Qualitative, quantitative and mixed methods approaches (5th ed). Thousand Oaks: SAGE

Salkind, N.J. (2010) Encyclopedia of research design. Thousand Oaks: SAGE Publications

Vasileiou, K., Barnett, J., Thorpe, S., & Young, T. (2018). Characterising and justifying sample size sufficiency in interview-based studies: systematic analysis of qualitative health research over a 15-year period. BMC Medical Research Methodology, 18(148)

  • Last Updated: Jun 13, 2024 10:34 AM
  • URL: https://deakin.libguides.com/qualitative-study-designs

InterQ Research

What’s in a Number? Understanding the Right Sample Size for Qualitative Research

  • May 3, 2019

By Julia Schaefer

Unlike in quantitative research, numbers matter less in qualitative research.

It’s about quality, not quantity. So what’s in a number?

When thinking about sample size, it’s really important to ensure that you understand your target and have recruited the right people for the study. Whether your company is targeting moms from the Midwest with household incomes of $70k+, or teens who use Facebook for more than 8 hours a week, it’s crucial to understand the goals and objectives of the study and how the right target can help answer your essential research questions.

Determining the Right Sample Size For Qualitative Research Tip #1: Recruit the Right Respondents

A high-quality panel includes much more than just members who are pulled from a general population. The right respondents for the study will have met all the criteria line-items identified from quantitative research studies and check the boxes that the client has identified through their own research. Only participants who match the audience specifications and background relevance expressed by the client should be actively recruited.

Determining the Right Sample Size For Qualitative Research Tip #2: No Two Studies are Alike

Choosing an appropriate study design is an important factor to consider when determining which sample size to use. There are various methods that can be used to gather insightful data, but not all methods may be applicable to your study and your project goal. In-depth interviews, focus groups, and ethnographic research are the most common methods used in qualitative market research. Each method can provide unique information, and certain methods are more relevant than others. The types of questions being studied play an equally important role in deciding on a sample size.

Determining the Right Sample Size For Qualitative Research Tip #3: Principle of Saturation and Diminishing Returns

Understanding which qualitative study design to use is very important. Your study should have a large enough sample size to uncover a variety of opinions, and the sample size should be limited at the point of saturation.

Saturation occurs when adding more participants to the study does not result in obtaining additional perspectives or information. One can say there is a point of diminishing returns with larger samples, as it leads to more data but doesn’t necessarily lead to more information. A sample size should be large enough to sufficiently describe the phenomenon of interest, and address the research question at hand. However, a large sample size risks having repetitive and redundant data.

The objective of qualitative research is to reduce discovery failure, while quantitative research aims to reduce estimation error. As qualitative research works to obtain diverse opinions from a sample on a client’s product/service/project, saturated data doesn’t necessarily benefit the project findings. As part of the analysis framework, one respondent’s opinion is enough to generate a code.

The Magic Number? Between 15-30

Based on research conducted on this issue, if you are building similar segments within the population, InterQ’s recommendation for in-depth interviews is a sample size of 15–30. In some cases, a minimum of 10 is sufficient, assuming there has been integrity in the recruiting process. With a rigorous recruiting process, studies have noted that a sample size as few as 10 can be extremely fruitful and still yield strong results.

Curious about qualitative research? Request a proposal today >


J Grad Med Educ, v.4(1), March 2012

Qualitative Research Part II: Participants, Analysis, and Quality Assurance

This is the second of a two-part series on qualitative research. Part 1 in the December 2011 issue of Journal of Graduate Medical Education provided an introduction to the topic and compared characteristics of quantitative and qualitative research, identified common data collection approaches, and briefly described data analysis and quality assessment techniques. Part II describes in more detail specific techniques and methods used to select participants, analyze data, and ensure research quality and rigor.

If you are relatively new to qualitative research, some references you may find especially helpful are provided below. The two texts by Creswell (2008 and 2009) are clear and practical.1,2 In 2008, the British Medical Journal offered a series of short essays on qualitative research; the references provided are easily read and digested.3–8 For those wishing to pursue qualitative research in more detail, a suggestion is to start with the appropriate chapters in Creswell 2008,1 and then move to the other texts suggested.9–11

To summarize the previous editorial, while quantitative research focuses predominantly on the impact of an intervention and generally answers questions like “did it work?” and “what was the outcome?”, qualitative research focuses on understanding the intervention or phenomenon and exploring questions like “why was this effective or not?” and “how is this helpful for learning?” The intent of qualitative research is to contribute to understanding. Hence, the research procedures for selecting participants, analyzing data, and ensuring research rigor differ from those for quantitative research. The following sections address these approaches. table 1 provides a comparative summary of methodological approaches for quantitative and qualitative research.

A Comparison of Qualitative and Quantitative Methodological Approaches


Data collection methods most commonly used in qualitative research are individual or group interviews (including focus groups), observation, and document review. They can be used alone or in combination. While the following sections are written in the context of using interviews or focus groups to collect data, the principles described for sample selection, data analysis, and quality assurance are applicable across qualitative approaches.

Selecting Participants

Quantitative research requires standardization of procedures and random selection of participants to remove the potential influence of external variables and ensure generalizability of results. In contrast, subject selection in qualitative research is purposeful; participants are selected who can best inform the research questions and enhance understanding of the phenomenon under study. 1 , 8 Hence, one of the most important tasks in the study design phase is to identify appropriate participants. Decisions regarding selection are based on the research questions, theoretical perspectives, and evidence informing the study.

The subjects sampled must be able to inform important facets and perspectives related to the phenomenon being studied. For example, in a study looking at a professionalism intervention, representative participants could be considered by role (residents and faculty), perspective (those who approve/disapprove the intervention), experience level (junior and senior residents), and/or diversity (gender, ethnicity, other background).

The second consideration is sample size. Quantitative research requires statistical calculation of sample size a priori to ensure sufficient power to confirm that the outcome can indeed be attributed to the intervention. In qualitative research, however, the sample size is not generally predetermined. The number of participants depends upon the number required to inform fully all important elements of the phenomenon being studied. That is, the sample size is sufficient when additional interviews or focus groups do not result in identification of new concepts, an end point called data saturation. To determine when data saturation occurs, analysis ideally occurs concurrently with data collection in an iterative cycle. This allows the researcher to document the emergence of new themes and also to identify perspectives that may otherwise be overlooked. In the professionalism intervention example, as data are analyzed, the researchers may note that only positive experiences and views are being reported. At this time, a decision could be made to identify and recruit residents who perceived the experience as less positive.

Data Analysis

The purpose of qualitative analysis is to interpret the data and the resulting themes, to facilitate understanding of the phenomenon being studied. It is often confused with content analysis, which is conducted to identify and describe results. 12 In the professionalism intervention example, content analysis of responses might report that residents identified the positive elements of the innovation to be integration with real patient cases, opportunity to hear the views of others, and time to reflect on one's own professionalism. An interpretive analysis, on the other hand, would seek to understand these responses by asking questions such as, “Were there conditions that most frequently elicited these positive responses?” Further interpretive analysis might show that faculty engagement influenced the positive responses, with more positive features being described by residents who had faculty who openly reflected upon their own professionalism or who asked probing questions about the cases. This interpretation can lead to a deeper understanding of the results and to new ideas or theories about relationships and/or about how and why the innovation was or was not effective.

Interpretive analysis is generally seen as being conducted in 3 stages: deconstruction, interpretation, and reconstruction. 11 These stages occur after preparing the data for analysis, ie, after transcription of the interviews or focus groups and verification of the transcripts with the recording.

  • Deconstruction refers to breaking down data into component parts in order to see what is included. It is similar to content analysis mentioned above. It requires reading and rereading interview or focus group transcripts and then breaking down data into categories or codes that describe the content.
  • Interpretation follows deconstruction and refers to making sense of and understanding the coded data. It involves comparing data codes and categories within and across transcripts and across variables deemed important to the study (eg, year of residency, discipline, engagement of faculty). Techniques for interpreting data and findings include discussion and comparison of codes among research team members while purposefully looking for similarities and differences among themes, comparing findings with those of other studies, exploring theories which might explain relationships among themes, and exploring negative results (those that do not confirm the dominant themes) in more detail.
  • Reconstruction refers to recreating or repackaging the prominent codes and themes in a manner that shows the relationships and insights derived in the interpretation phase and that explains them more broadly in light of existing knowledge and theoretical perspectives. Generally one or two central concepts will emerge as central or overarching, and others will appear as subthemes that further contribute to the central concepts. Reconstruction requires contextualizing the findings, ie, positioning and framing them within existing theory, evidence, and practice.
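The comparison of codes within and across transcripts described above is essentially the construction of a code-by-transcript matrix. The sketch below illustrates the data structure only; the transcript IDs, codes, and excerpts are invented for the example and are not from the editorial.

```python
from collections import defaultdict

# Hypothetical coded transcripts: transcript ID -> list of (code, excerpt) pairs
# produced during the deconstruction stage.
coded = {
    "resident-01": [("faculty engagement", "…"), ("time to reflect", "…")],
    "resident-02": [("faculty engagement", "…"), ("real patient cases", "…")],
    "faculty-01":  [("real patient cases", "…")],
}

# Code-by-transcript matrix: which transcripts each code appears in.
matrix = defaultdict(set)
for transcript, pairs in coded.items():
    for code, _excerpt in pairs:
        matrix[code].add(transcript)

# Codes shared across many transcripts are candidates for central themes;
# codes unique to one transcript may flag negative cases worth revisiting.
for code, where in sorted(matrix.items()):
    print(f"{code}: appears in {len(where)} transcript(s)")
```

The same matrix also supports the comparisons across study variables mentioned in the interpretation stage (e.g. grouping transcript IDs by year of residency before counting).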

Ensuring Research Quality and Rigor

Within qualitative research, two main strategies promote the rigor and quality of the research: ensuring the quality or “authenticity” of the data and the quality or “trustworthiness” of the analysis. 8 , 12 These are similar in many ways to ensuring validity and reliability, respectively, in quantitative research.

 1. Authenticity of the data refers to the quality of the data and data collection procedures. Elements to consider include:

  • Sampling approach and participant selection to enable the research question to be addressed appropriately (see “Selecting Participants” above) and reduce the potential of having a biased sample.

  •  Data triangulation refers to using multiple data sources to produce a more comprehensive view of the phenomenon being studied, eg, interviewing both residents and faculty and using multiple residency sites and/or disciplines.

  • Using the appropriate method to answer the research questions, considering the nature of the topic being explored, eg, individual interviews rather than focus groups are generally more appropriate for topics of a sensitive nature.

  • Using interview and other guides that are not biased or leading, ie, that do not ask questions in a way that may lead the participant to answer in a particular manner.

  • The researcher's and research team's relationships to the study setting and participants need to be explicit, eg, describe the potential for coercion when a faculty member requests his or her own residents to participate in a study.

  • The researcher's and team members' own biases and beliefs relative to the phenomenon under study must be made explicit, and, when necessary, appropriate steps must be taken to reduce their impact on the quality of data collected, eg, by selecting a neutral “third party” interviewer.

 2. Trustworthiness of the analysis refers to the quality of data analysis. Elements to consider when assessing the quality of analysis include:

  • Analysis process: is this clearly described, eg, the roles of the team members, what was done, timing, and sequencing? Is it clear how the data codes or categories were developed? Does the process reflect best practices, eg, comparison of findings within and among transcripts, and use of memos to record decision points?

  • Procedure for resolving differences in findings and among team members: this needs to be clearly described.

  • Process for addressing the potential influence the researchers' views and beliefs may have upon the analysis.

  • Use of a qualitative software program: if used, how was this used?

In summary, this editorial has addressed 3 components of conducting qualitative research: selecting participants, performing data analysis, and assuring research rigor and quality. See table 2 for the key elements for each of these topics.

Conducting Qualitative Research: Summary of Key Elements


JGME editors look forward to reading medical education papers employing qualitative methods and perspectives. We trust these two editorials may be helpful to potential authors and readers, and we welcome your comments on this subject.

Joan Sargeant, PhD, is Professor in the Division of Medical Education, Dalhousie University, Halifax, Nova Scotia, Canada.

Sample Size Policy for Qualitative Studies Using In-Depth Interviews

  • Published: 12 September 2012
  • Volume 41, pages 1319–1320 (2012)


Shari L. Dworkin


In recent years, there has been an increase in submissions to the Journal that draw on qualitative research methods. This increase is welcome and indicates not only the interdisciplinarity embraced by the Journal (Zucker, 2002) but also its commitment to a wide array of methodologies.

For those who do select qualitative methods and use grounded theory and in-depth interviews in particular, there appear to be a lot of questions that authors have had recently about how to write a rigorous Method section. This topic will be addressed in a subsequent Editorial. At this time, however, the most common question we receive is: “How large does my sample size have to be?” and hence I would like to take this opportunity to answer this question by discussing relevant debates and then the policy of the Archives of Sexual Behavior.

The sample size used in qualitative research methods is often smaller than that used in quantitative research methods. This is because qualitative research methods are often concerned with garnering an in-depth understanding of a phenomenon or are focused on meaning (and heterogeneities in meaning), which are often centered on the how and why of a particular issue, process, situation, subculture, scene or set of social interactions. In-depth interview work is not as concerned with making generalizations to a larger population of interest and does not tend to rely on hypothesis testing but rather is more inductive and emergent in its process. As such, the aim of grounded theory and in-depth interviews is to create “categories from the data and then to analyze relationships between categories” while attending to how the “lived experience” of research participants can be understood (Charmaz, 1990, p. 1162).

There are several debates concerning what sample size is the right size for such endeavors. Most scholars argue that the concept of saturation is the most important factor to think about when mulling over sample size decisions in qualitative research (Mason, 2010). Saturation is defined by many as the point at which the data collection process no longer offers any new or relevant data. Another way to state this is that conceptual categories in a research project can be considered saturated “when gathering fresh data no longer sparks new theoretical insights, nor reveals new properties of your core theoretical categories” (Charmaz, 2006, p. 113). Saturation depends on many factors, and not all of them are under the researcher’s control. Some of these include: How homogenous or heterogeneous is the population being studied? What are the selection criteria? How much money is in the budget to carry out the study? Are there key stratifiers (e.g., conceptual, demographic) that are critical for an in-depth understanding of the topic being examined? What is the timeline that the researcher faces? How experienced is the researcher in being able to even determine when she or he has actually reached saturation (Charmaz, 2006)? Is the author carrying out theoretical sampling and is, therefore, concerned with ensuring depth on relevant concepts and examining a range of concepts and characteristics that are deemed critical for emergent findings (Glaser & Strauss, 1967; Strauss & Corbin, 1994, 2007)?

While some experts in qualitative research avoid the topic of how many interviews “are enough,” there is indeed variability in what is suggested as a minimum. A very large number of articles, book chapters, and books offer guidance, suggesting anywhere from 5 to 50 participants as adequate. All of these pieces of work engage in nuanced debates when responding to the question of “how many” and frequently respond with a vague (and, actually, reasonable) “it depends.” Numerous factors are said to be important, including “the quality of data, the scope of the study, the nature of the topic, the amount of useful information obtained from each participant, the use of shadowed data, and the qualitative method and study design used” (Morse, 2000, p. 1). Others argue that the “how many” question can be the wrong question and that the rigor of the method “depends upon developing the range of relevant conceptual categories, saturating (filling, supporting, and providing repeated evidence for) those categories,” and fully explaining the data (Charmaz, 1990). Indeed, there have been countless conferences and conference sessions on these debates, reports written, and myriad publications are available as well (for a compilation of debates, see Baker & Edwards, 2012).

Taking all of these perspectives into account, the Archives of Sexual Behavior is putting forward a policy for authors in order to have more clarity on what is expected in terms of sample size for studies drawing on grounded theory and in-depth interviews. The policy of the Archives of Sexual Behavior will be that it adheres to the recommendation that 25–30 participants is the minimum sample size required to reach saturation and redundancy in grounded theory studies that use in-depth interviews. This number is considered adequate for publications in journals because it (1) may allow for thorough examination of the characteristics that address the research questions and to distinguish conceptual categories of interest, (2) maximizes the possibility that enough data have been collected to clarify relationships between conceptual categories and identify variation in processes, and (3) maximizes the chances that negative cases and hypothetical negative cases have been explored in the data (Charmaz, 2006; Morse, 1994, 1995).

The Journal does not want to paradoxically and rigidly quantify sample size when the endeavor at hand is qualitative in nature and the debates on this matter are complex. However, we are providing this practical guidance. We want to ensure that more of our submissions have an adequate sample size so as to get closer to reaching the goal of saturation and redundancy across relevant characteristics and concepts. The current recommendation that is being put forward does not include any comment on other qualitative methodologies, such as content and textual analysis, participant observation, focus groups, case studies, clinical cases or mixed quantitative–qualitative methods. The current recommendation also does not apply to phenomenological studies or life history approaches. The current guidance is intended to offer one clear and consistent standard for research projects that use grounded theory and draw on in-depth interviews.

Editor’s note: Dr. Dworkin is an Associate Editor of the Journal and is responsible for qualitative submissions.

Baker, S. E., & Edwards, R. (2012). How many qualitative interviews is enough? National Centre for Research Methods. Available at: http://eprints.ncrm.ac.uk/2273/.

Charmaz, K. (1990). ‘Discovering’ chronic illness: Using grounded theory. Social Science and Medicine, 30, 1161–1172.

Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. London: Sage Publications.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine Publishing Co.

Mason, M. (2010). Sample size and saturation in PhD studies using qualitative interviews. Forum: Qualitative Social Research, 11(3) [Article No. 8].

Morse, J. M. (1994). Designing funded qualitative research. In N. Denzin & Y. Lincoln (Eds.), Handbook of qualitative research (pp. 220–235). Thousand Oaks, CA: Sage Publications.

Morse, J. M. (1995). The significance of saturation. Qualitative Health Research, 5, 147–149.

Morse, J. M. (2000). Determining sample size. Qualitative Health Research, 10, 3–5.

Strauss, A. L., & Corbin, J. M. (1994). Grounded theory methodology. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 273–285). Thousand Oaks, CA: Sage Publications.

Strauss, A. L., & Corbin, J. M. (2007). Basics of qualitative research: Techniques and procedures for developing grounded theory. Thousand Oaks, CA: Sage Publications.

Zucker, K. J. (2002). From the Editor’s desk: Receiving the torch in the era of sexology’s renaissance. Archives of Sexual Behavior, 31, 1–6.

Author information: Shari L. Dworkin (corresponding author), Department of Social and Behavioral Sciences, University of California, San Francisco, 3333 California St., LHTS #455, San Francisco, CA 94118, USA.


About this article

Dworkin, S.L. Sample Size Policy for Qualitative Studies Using In-Depth Interviews. Arch Sex Behav 41, 1319–1320 (2012). https://doi.org/10.1007/s10508-012-0016-6

Published 12 September 2012; issue date December 2012.

How Many Participants Do I Need? A Guide to Sample Estimation


As a qualitative mentor, a question I am frequently asked is, “How do I select the right sample size?” There is no short answer or hard and fast rule for this. As with all things, there is nuance here, and much of it depends on other features of your study. The purpose of this blog post is to provide you with strategies to select the appropriate sample for your qualitative study.

When students ask me what their sample needs to be, my first response is to look at the literature. In your literature review, you reviewed studies with methodologies similar to yours. Look at what others who have conducted studies like your own have used as samples, and start to model yours on these estimates. This will provide you with a good start to estimate your sample size.

The key to finding the right number of participants to recruit is to estimate the point at which you will reach data saturation, or when you are not gleaning new information as you add participants. Practically speaking, this means that you are not creating new codes or modifying your codebook anymore. Guest et al. (2006) found that in homogeneous studies using purposeful sampling, like many qualitative studies, 12 interviews should be sufficient to achieve data saturation.
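Practically, that stopping rule can be checked mechanically by recording which codes each interview contributes to the codebook. Here is a minimal sketch; the `saturation_point` helper and the example codes are invented for illustration, not taken from Guest et al.:

```python
def saturation_point(interview_codes, stable_run=3):
    """Return the 1-based index of the last interview that added a new
    code, once `stable_run` consecutive interviews add nothing new;
    return None if saturation was not reached."""
    seen = set()
    quiet = 0  # consecutive interviews that added no new codes
    for i, codes in enumerate(interview_codes, start=1):
        if set(codes) - seen:
            quiet = 0
        else:
            quiet += 1
            if quiet == stable_run:
                return i - stable_run
        seen |= set(codes)
    return None

# Hypothetical codes produced by six interviews:
interviews = [
    {"cost", "trust"}, {"cost", "usability"}, {"trust", "support"},
    {"cost"}, {"usability", "support"}, {"trust"},
]
print(saturation_point(interviews))  # 3: interviews 4-6 added no new codes
```

The `stable_run` parameter is a judgment call; requiring several quiet sessions in a row guards against declaring saturation after one uninformative interview.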

However, there are qualifications here, including the specifics of a data set as well as a researcher’s experience or tendency to lump or split categories. More recently, Hagaman and Wutich (2017) explored how many interviews were needed to identify metathemes, or those overarching themes, in qualitative research. In contrast to Guest et al.’s (2006) work, and in a very different study, Hagaman and Wutich (2017) found that a larger sample of between 20 and 40 interviews was necessary to detect those metathemes.

You can see that between these two articles, there is variation in sample size and the number of participants necessary to reach data saturation. These should, however, provide some guidance and a starting point for thinking about your own sample. At a minimum, you probably want to begin with a sample of 12-15 participants. Plan to add more as needed if you do not believe you have reached saturation in that amount.

Guest, G., Bunce, A., & Johnson, L. (2006). How many interviews are enough?: An experiment with data saturation and variability. Field Methods, 18(1), 59-82. https://doi.org/10.1177/1525822X05279903

Hagaman, A. K., & Wutich, A. (2017). How many interviews are enough to identify metathemes in multisited and cross-cultural research? Another perspective on Guest, Bunce, and Johnson’s (2006) landmark study. Field Methods, 29(1), 23-41. https://doi.org/10.1177/1525822X16640447



What is the ideal Sample Size in Qualitative Research?

Presented by InterQ Research LLC

If we were to assemble a list of “most asked questions” that we receive from new clients, it’s this:

What is the ideal sample size in qualitative research? It’s a great question, because panel size does matter, though perhaps not as much as it does in quantitative research, where we’re aiming for a statistically meaningful number. Let’s explore this whole issue of panel size and what you should be looking for from participant panels when conducting qualitative research.

First off, look at quality versus quantity

Most likely, your company is looking for market research on a very specific audience type: B2B decision makers in human resources; moms who live in the Midwest and have household incomes of $70k+; teens who use Facebook more than 8 hours a week. Specificity is a great thing, and without fail, every client we work with has a good grasp on their audience type. In qualitative panels, therefore, our first objective is to ensure that we’re recruiting people who meet every criterion we identify through quantitative research – and the criteria that our clients have pinpointed through their own research. Panel quality – having the right members in the panel – is far more important than pulling from a general population that falls within broad parameters. So first and foremost, we focus on recruiting the right respondents who match our audience specifications.

Study design in qualitative research

The type of qualitative study chosen is also one of the most important factors to consider when choosing sample size. In-depth interviews, focus groups, and ethnographic research are the most common methods used in qualitative market research, and the types of questions being studied matter as much as the sample size chosen for these various methods. One of the most important principles to keep in mind – in all of these study designs – is the principle of saturation.

The objective of qualitative research (as compared to quantitative research) is to lessen discovery failure; in quantitative research, the objective is to reduce estimation error. Here’s where the principle of saturation comes in: at saturation, collecting new data isn’t giving the researcher any additional insights into the issue being investigated. Qualitative research seeks to uncover diverse opinions from the sample, and one person’s opinion is enough to generate a code (part of the analysis framework). There is a point of diminishing returns with larger samples; more data does not necessarily lead to more information – it may simply lead to the same information being repeated (saturation). The goal, therefore, is to have a sample large enough that we’re able to uncover a range of opinions, but to cut the sample off at the number where we’re getting saturation and repetitive data.

So … is there a magical number to aim for in qualitative research?

So now we’re back to our original question:

What is the ideal sample size in qualitative research?

We’ll answer it this time. Based on academic studies on this very issue, 30 seems to be an ideal sample size for the most comprehensive view, but studies can have as few as 10 total participants and still yield extremely fruitful, and applicable, results. (This goes back to excellence in recruiting.)

Our general recommendation for in-depth interviews is a sample size of 30, if we’re building a study that includes similar segments within the population. The minimum can be 10 – but again, this assumes the integrity of the recruited population.


Qualitative Researcher Dr Kriukow


How to choose the right sample size for a qualitative study… and convince your supervisor that you know what you’re doing.


The question of how many participants are enough for a qualitative interview study is, in my opinion, one of the most difficult questions to find an answer to in the literature. In fact, many authors who set out to find specific guidelines on the ideal sample size in qualitative research have concluded that these are “virtually non-existent” (Guest, Bunce and Johnson, 2006: 59). This is particularly unfortunate given that, as a student planning to undertake your research, one of the things most likely to be asked of you is to indicate, and justify, the number of participants in your planned study (this also includes your PhD proposal, in which you are expected to give as much detail of the study as possible).

If you then turn to the literature, hoping to find advice from some of the great minds in research methodology, you are likely to find them evading the question and often hiding behind the term “saturation”, which refers to the point at which gathering new data does not provide any new theoretical insights into the studied phenomenon. Although the concept of saturation may itself be controversial, not least because the longer you explore, analyse and reflect on your data, the more likely you are to find something “new” in it, it has come to be the guiding concept in establishing sample size in many qualitative studies. As Guest, Bunce and Johnson (2006) rightly point out, however:

“although the idea of saturation is helpful at the conceptual level, it provides little practical guidance for estimating sample sizes for robust research prior to data collection”

     (Guest, Bunce and Johnson, 2006: 59)

In other words – how in the world are we supposed to know when we will reach saturation PRIOR TO THE STUDY???

My advice is to use the available literature on the point of saturation to justify your decision regarding the sample size. I did this for my PhD study, as I was growing frustrated that I really had to justify my decision to include 20 participants for interviews, even though I had read dozens of reports in which this number, or a smaller one, was common (“are you going to interview 20 participants just because others did?”). I just felt that this would be enough, and my common sense, which, as I learnt throughout my PhD, was the last thing that anyone would care about, was telling me the same thing. In order to support my decision with the literature, however, and considering that there are hardly any guidelines for establishing sample size, I decided to try to reach some sort of conclusion as to how many participants are enough to reach saturation and use it as my main argument for establishing the size of the sample.

So what does the literature tell us about this? Just as there is no single answer to what sample size is sufficient in general, there is no single answer to what sample size is sufficient to reach theoretical saturation. Such factors as the heterogeneity of the studied population, the scope of the study and the adopted methods and their application (e.g. the length of the interviews) are believed, however, to play a central role in achieving it (cf. Baker and Edwards, 2012; Guest, Bunce and Johnson, 2006; Mason, 2010). Mason’s (2010) analysis of 560 PhD studies that adopted a qualitative interview as their main method revealed that the most common sample size in qualitative research is between 15 and 50 participants, with 20 being the average sample size in grounded theory studies (which was also the type of study I was undertaking). Guest, Bunce and Johnson (2006) used data from their own study to conclude that 88% of the codes they developed when analysing the data from 60 qualitative interviews had been created by the time 12 interviews had been conducted.

These findings helped me in arguing that my initial sample size was going to be 20. “Given the detailed design of the study, which includes triangulation of the data and methods”, I argued, “I believe that this number will enable me to make valid judgements about the general trends emerging in the data”. I also stated that I was planning to recruit more participants, should saturation not occur.

I hope that this article will help you in your quest to determine the sample size for your study and give you an idea of how to argue that it is a well thought-through decision. Do remember, however, that 20 participants may be enough for one study and not enough, or too many, for another. The point of this article was not to argue that 20 participants is a universally right number for a qualitative study, but rather to point out that there is no such universally right number, that you are not the only one struggling to find guidelines on interview sample size, and to put forward the concept of saturation as one possible principle to guide you in deciding how many participants to recruit.

If you have any questions regarding this topic, comment below or send me a message through my Facebook page.

  • UPDATE – see my Facebook page for my response to the question about the relevance of “saturation” for Phenomenological research

References:

Baker, S. & Edwards, R. (eds., 2012). How many qualitative interviews is enough? Expert voices and early career reflections on sampling and cases in qualitative research. National Centre for Research Methods, 1-42.

Guest, G., Bunce, A. & Johnson, L. (2006). How many interviews are enough? An experiment with data saturation and variability. Field Methods, 18(1), 59-82.

Mason, M. (2010). Sample size and saturation in PhD studies using qualitative interviews. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 11(3).

Jarek Kriukow

Sample size for qualitative research

How large should the sample size be in a qualitative study? This article discusses the importance of sample size in qualitative research.

The risk of missing something important

Editor’s note: Peter DePaulo is an independent marketing research consultant and focus group moderator doing business as DePaulo Research Consulting, Montgomeryville, Pa.

In a qualitative research project, how large should the sample be? How many focus group respondents, individual depth interviews (IDIs), or ethnographic observations are needed?

We do have some informal rules of thumb. For example, Maria Krieger (in her white paper, “The Single Group Caveat,” Brain Tree Research & Consulting, 1991) advises that separate focus groups are needed for major segments such as men, women, and age groups, and that two or more groups are needed per segment because any one group may be idiosyncratic. Another guideline is to continue doing groups or IDIs until we seem to have reached a saturation point and are no longer hearing anything new.

Such rules are intuitive and reasonable, but they are not solidly grounded and do not really tell us what an optimal qualitative sample size may be. The approach proposed here gives specific answers based on a firm foundation.

First, the importance of sample size in qualitative research must be understood.

Size does matter, even for a qualitative sample

One might suppose that “N” (the number in the sample) simply is not very important in a qualitative project. After all, the effect of increasing N, as we learned in statistics class, is to reduce the sampling error (e.g., the +/- 3 percent variation in opinion polls with N = 1,000) in a quantitative estimate. Qualitative research normally is inappropriate for estimating quantities. So, we lack the old familiar reason for increasing sample size.

Nevertheless, in qualitative work, we do try to discover something. We may be seeking to uncover: the reasons why consumers may or may not be satisfied with a product; the product attributes that may be important to users; possible consumer perceptions of celebrity spokespersons; the various problems that consumers may experience with our brand; or other kinds of insights. (For lack of a better term, I will use the word “perception” to refer to a reason, need, attribute, problem, or whatever the qualitative project is intended to uncover.) It would be up to a subsequent quantitative study to estimate, with statistical precision, how important or prevalent each perception actually is.

The key point is this: Our qualitative sample must be big enough to assure that we are likely to hear most or all of the perceptions that might be important. Within a target market, different customers may have diverse perceptions. Therefore, the smaller the sample size, the narrower the range of perceptions we may hear. On the positive side, the larger the sample size, the less likely it is that we would fail to discover a perception that we would have wanted to know. In other words, our objective in designing qualitative research is to reduce the chances of discovery failure, as opposed to reducing (quantitative) estimation error.

Discovery failure can be serious

What might go wrong if a qualitative project fails to uncover an actionable perception (or attribute, opinion, need, experience, etc.)? Here are some possibilities:

  • A source of dissatisfaction is not discovered - and not corrected. In highly competitive industries, even a small incidence of dissatisfaction could dent the bottom line.
  • In the qualitative testing of an advertisement, a copy point that offends a small but vocal subgroup of the market is not discovered until a public-relations fiasco erupts.
  • When qualitative procedures are used to pre-test a quantitative questionnaire, an undiscovered ambiguity in the wording of a question may mean that some of the subsequent quantitative respondents give invalid responses. Thus, qualitative discovery failure eventually can result in quantitative estimation error due to respondent miscomprehension.

Therefore, size does matter in a qualitative sample, though for a different reason than in a quant sample. The following example shows how the risk of discovery failure may be easy to overlook even when it is formidable.

Example of the risk being higher than expected

The managers of a medical clinic (name withheld) had heard favorable anecdotal feedback about the clinic’s quality, but wanted an independent evaluation through research. The budget permitted only one focus group with 10 clinic patients. All 10 respondents clearly were satisfied with the clinic, and group discussion did not reverse these views.

Did we miss anything as a result of interviewing only 10? Suppose, for example, that the clinic had a moody staff member who, unbeknownst to management, was aggravating one in 10 clinic patients. Also, suppose that management would have wanted to discover anything that affects the satisfaction of at least 10 percent of customers. If there really was an unknown satisfaction problem with a 10 percent incidence, then what was the chance that our sample of 10 happened to miss it? That is, what is the probability that no member of the subgroup defined as those who experienced the staffer in a bad mood happened to get into the sample?

At first thought, the answer might seem to be “not much” chance of missing the problem. The hypothetical incidence is “one in 10,” and we did indeed interview 10 patients. Actually, the probability that our sample failed to include a patient aggravated by the moody staffer turns out to be just over one in three (0.349 to be exact). This probability is simple to calculate: Consider that the chance of any one customer selected at random not being a member of the 10 percent (aggravated) subgroup is 0.9 (i.e., a nine in 10 chance). Next, consider that the chance of failing to reach anyone from the 10 percent subgroup twice in a row (by selecting two customers at random) is 0.9 X 0.9, or 0.9 to the second power, which equals 0.81. Now, it should be clear that the chance of missing the subgroup 10 times in a row (i.e., when drawing a sample of 10) is 0.9 to the tenth power, which is 0.35. Thus, there is a 35 percent chance that our sample of 10 would have “missed” patients who experienced the staffer in a bad mood. Put another way, just over one in three random samples of 10 will miss an experience or characteristic with an incidence of 10 percent.
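The arithmetic above is just repeated multiplication, and a few lines are enough to check it (a sketch using the same hypothetical 10 percent incidence):

```python
# Each randomly drawn patient misses the 10% subgroup with probability 0.9;
# independent draws multiply, so n draws all miss with 0.9 ** n.
p_miss_one = 0.9
p_miss_two = p_miss_one ** 2    # two respondents in a row
p_miss_ten = p_miss_one ** 10   # the whole sample of 10

print(round(p_miss_two, 2))   # 0.81
print(round(p_miss_ten, 3))   # 0.349, just over one in three
```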

This seems counter-intuitively high, even to quant researchers to whom I have shown this analysis. Perhaps people implicitly assume the fallacy that if something has an overall frequency of one in N, then it is almost sure to appear in N chances.

Basing the decision on calculated probabilities

So, how can we figure the sample size needed to reduce the risk as much as we want? I am proposing two ways. One would be based on calculated probabilities like those in the table, which was created by repeating the power calculations described above for various incidences and sample sizes. The client and researcher would peruse the table and select a sample size that is affordable yet reduces the risk of discovery failure to a tolerable level.

For example, if the research team would want to discover a perception with an incidence as low as 10 percent of the population, and if the team wanted to reduce the risk of missing that subgroup to less than 5 percent, then a sample of N=30 would suffice, assuming random selection. (To be exact, the risk shown in the table is .042, or 4.2 percent.) This is analogous to having 95 percent confidence in being able to discover a perception with a 10 percent incidence. Remember, however, that we are expressing the confidence in uncovering a qualitative insight - as opposed to the usual quantitative notion of “confidence” in estimating a proportion or mean plus or minus the measurement error.

If the team wants to be more conservative and reduce the risk of missing the one-in-10 subgroup to less than 1 percent (i.e., 99 percent confidence), then a sample of nearly 50 would be needed. This would reduce the risk to nearly 0.005 (see table).
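All of the risk figures quoted here come from the same expression, (1 − incidence) ** N, so the table’s values can be regenerated in a few lines (the particular rows and columns below are my assumption, since the original table is not reproduced in this copy):

```python
def miss_probability(incidence, n):
    """Probability that a simple random sample of n respondents contains
    no member of a subgroup with the given population incidence."""
    return (1 - incidence) ** n

# One row per incidence, one column per sample size:
for p in (0.05, 0.10, 0.20):
    row = {n: round(miss_probability(p, n), 3) for n in (10, 30, 50, 100)}
    print(f"incidence {p:.0%}: {row}")
```

For instance, `miss_probability(0.10, 30)` rounds to 0.042 and `miss_probability(0.10, 50)` to 0.005, matching the figures cited in the text.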

What about non-randomness?

Of course, the table assumes random sampling, and qualitative samples often are not randomly drawn. Typically, focus groups are recruited from facility databases, which are not guaranteed to be strictly representative of the local adult population, and factors such as refusals (also a problem in quantitative surveys, by the way) further compromise the randomness of the sample.

Unfortunately, nothing can be done about subgroups that are impossible to reach, such as people who, for whatever reason, never cooperate when recruiters call. Nevertheless, we can still sample those subgroups who are less likely to be reached as long as the recruiter’s call has some chance of being received favorably, for example, people who are home only half as often as the average target customer but will still answer the call and accept our invitation to participate. We can compensate for their reduced likelihood of being contacted by thinking of their reachable incidence as half of their actual incidence. Specifically, if we wanted to allocate enough budget to reach a 10 percent subgroup even if it is twice as hard to reach, then we would suppose that their reachable incidence is as low as 5 percent, and look at the 5 percent row in the table. If, for instance, we wanted to be very conservative, we would recruit 100 respondents, resulting in less than a 1 percent chance - .006, to be exact - of missing a 5 percent subgroup (or a 10 percent subgroup that behaves like a 5 percent subgroup in likelihood of being reached).
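Rather than reading N off a table, the minimum sample size for a target miss risk can be computed directly by inverting (1 − p)^N ≤ risk, which gives N ≥ log(risk) / log(1 − p). A sketch; the `required_n` helper is mine, not from the article:

```python
import math

def required_n(incidence, max_miss_risk):
    """Smallest random-sample size N with (1 - incidence) ** N <= max_miss_risk."""
    return math.ceil(math.log(max_miss_risk) / math.log(1 - incidence))

print(required_n(0.10, 0.05))  # 29: consistent with "a sample of N=30 would suffice"
print(required_n(0.10, 0.01))  # 44: consistent with "a sample of nearly 50"
print(required_n(0.05, 0.01))  # 90: a 10% subgroup treated as 5% because it is twice as hard to reach
```

The exact minimums fall slightly below the round table values quoted in the text because the table is tabulated in increments of sample size.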

An approach based on actual qualitative findings

The other way of figuring an appropriate sample size would be to consider the findings of a pair of actual qualitative studies reported by Abbie Griffin and John Hauser in an article, “The Voice of the Customer” (Marketing Science, Winter 1993). These researchers looked at the number of customer needs uncovered by various numbers of focus groups and in-depth interviews.

In one of the two studies, two-hour focus groups and one-hour in-depth interviews (IDIs) were conducted with users of a complex piece of office equipment. In the other study, IDIs were conducted with consumers of coolers, knapsacks, and other portable means of storing food. Both studies looked at the number of needs (attributes, broadly defined) uncovered for each product category. Using mathematical extrapolations, the authors hypothesized that 20-30 IDIs are needed to uncover 90-95 percent of all customer needs for the product categories studied.

As with typical learning curves, there were diminishing returns in the sense that fewer new (non-duplicate) needs were uncovered with each additional IDI. It seemed that few additional needs would be uncovered after 30 IDIs. This is consistent with the probability table (shown earlier), which shows that perceptions of all but the smallest market segments are likely to be found in samples of 30 or fewer.
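The diminishing-returns pattern can be illustrated with a toy coverage model: if each customer need independently surfaces with some probability q in any one IDI, the expected fraction of needs uncovered after n interviews is 1 - (1 - q)^n. The per-interview rate q = 0.10 below is purely illustrative, not a figure from Griffin and Hauser:

```python
# Toy model of cumulative need discovery across interviews: each need
# surfaces with probability q in any single IDI, independently.
def expected_coverage(q: float, n: int) -> float:
    return 1 - (1 - q) ** n

for n in (10, 20, 30):
    print(n, round(expected_coverage(0.10, n), 2))
```

With this illustrative rate, expected coverage crosses 90 percent between 20 and 30 interviews, mirroring the 20-30 IDI range and the flattening curve reported above.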

In the office equipment study, one two-hour focus group was no better than two one-hour IDIs, implying that “group synergies [did] not seem to be present” in the focus groups. The study also suggested that multiple analysts are needed to uncover the broadest range of needs.

These studies were conducted within the context of quality function deployment, where, according to the authors, 200-400 “customer needs” are usually identified. It is not clear how the results might generalize to other qualitative applications.

Nevertheless, if one were to base a sample-size decision on the Griffin and Hauser results, the implication would be to conduct 20-30 IDIs and to arrange for multiple analysts to look for insights in the data. Perhaps backroom observers could, to some extent, serve as additional analysts by taking notes while watching the groups or interviews. The observers’ notes might contain some insights that the moderator overlooks, thus helping to minimize the chances of missing something important.

N=30 as a starting point for planning

Neither the calculation of probabilities in the prior table nor the empirical rationale of Griffin and Hauser is assured of being the last word on qualitative sample size. There might be other ways of figuring the number of IDIs, groups, or ethnographic observations needed to avoid missing something important.

Until the definitive answer is provided, perhaps an N of 30 respondents is a reasonable starting point for deciding the qualitative sample size that can reveal the full range (or nearly the full range) of potentially important customer perceptions. An N of 30 reduces the probability of missing a perception with a 10 percent incidence to less than 5 percent (assuming random sampling), and it is the upper end of the range found by Griffin and Hauser. If the budget is limited, we might reduce the N below 30, but the client must understand the increased risk of missing perceptions that may be worth knowing. If the stakes and budget are high enough, we might go with a larger sample in order to ensure that smaller (or harder-to-reach) subgroups are still likely to be represented.

If focus groups are desired, and we want to count each respondent separately toward the N we choose (e.g., getting an N of 30 from three groups with 10 respondents in each), then it is important for every respondent to have sufficient air time on the key issues. Using mini groups instead of traditional-size groups could help achieve this objective. Also, it is critical for the moderator to control dominators and bring out the shy people, lest the distinctive perceptions of less-talkative customers be missed.

Across segments or within each one?

A complication arises when we are separately exploring different customer segments, such as men versus women, different age groups, or consumers in different geographic regions. In the case of gender and a desired N of 30, for example, do we need 30 in total (15 males plus 15 females) or do we really need to interview 60 people (30 males plus 30 females)? This is a judgment call, which would depend on the researchers’ belief in the extent to which customer perceptions may vary from segment to segment. Of course, it may also depend on budget. To play it safe, each segment should have its own N large enough so that appreciable subgroups within the segment are likely to be represented in the sample.

What if we only want the “typical” or “majority” view?

For some purportedly qualitative studies, the stated or implied purpose may be to get a sense of how customers feel overall about the issue under study. For example, the client may want to know whether customers “generally” respond favorably to a new concept. In that case, it might be argued that we need not be concerned about having a sample large enough to make certain that we discover minority viewpoints, because the client is interested only in how “most” customers react.

The problem with this agenda is that the “qualitative” research would have an implicit quantitative purpose: to reveal the attribute or point of view held by more than 50 percent of the population. If, indeed, we observe what “most” qualitative respondents say or do and then infer that we have found the majority reaction, we are doing more than “discovering” that reaction: We are implicitly estimating its incidence at more than 50 percent.

The approach I propose makes no such inferences. If we find that only one respondent in a sample of 30 holds a particular view, we make no assumption that it represents a 10 percent population incidence, although, as discussed later, it might be that high. The actual population incidence is likely to be closer to 3.3 percent (1/30) than to 10 percent. Moreover, to keep the study qualitative, we should not say that we have estimated the incidence at all. We only want to ensure that if there is an attribute or opinion with an incidence as low as 10 percent, we are likely to have at least one respondent to speak for it - and a sample of 30 will probably do the job.
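The caveat that a lone mention "might be that high" is easy to check: under a true 10 percent incidence, seeing the view in exactly one of 30 respondents is not at all unlikely. A quick binomial sketch:

```python
from math import comb

# Probability of observing a 10 percent-incidence view in exactly one
# respondent out of a random sample of 30.
p, n = 0.10, 30
prob_exactly_one = comb(n, 1) * p * (1 - p) ** (n - 1)
print(round(prob_exactly_one, 2))  # 0.14
```

A 14 percent chance is far from negligible, which is why a single mention licenses discovery of the perception but no estimate of its incidence.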

If we do want to draw quantitative inferences from a qualitative procedure (and, normally, this is ill advised), then this paper does not apply. Instead, the researchers should use the usual calculations for setting a quantitative sample size at which the estimation error resulting from random sampling variations would be acceptably low.

Keeping qualitative pure

Whenever I present this sample-size proposal, someone usually objects that I am somehow “quantifying qualitative.” On the contrary, estimating the chances of missing a potentially important perception is completely different from estimating the percent of a target population who hold a particular perception. To put it another way, calculating the odds of missing a perception with a hypothetical incidence does not quantify the incidences of those perceptions that we actually do uncover.

Therefore, qualitative consultants should not be reluctant to talk about the probability of missing something important. In so doing, they will not lose their identity as qualitative researchers, nor will they need any “high math.” Moreover, by distinguishing between discovery failure and estimation error, researchers can help their clients fully understand the difference between qualitative and quantitative purposes. In short, the approach I propose is intended to ensure that qualitative will accomplish what it does best - to discover (not measure) potentially important insights.


Research article | Open access | Published: 21 November 2018

Characterising and justifying sample size sufficiency in interview-based studies: systematic analysis of qualitative health research over a 15-year period

Konstantina Vasileiou (ORCID: orcid.org/0000-0001-5047-3920), Julie Barnett, Susan Thorpe & Terry Young

BMC Medical Research Methodology, volume 18, Article number: 148 (2018)


Background

Choosing a suitable sample size in qualitative research is an area of conceptual debate and practical uncertainty. That sample size principles, guidelines and tools have been developed to enable researchers to set, and justify the acceptability of, their sample size is an indication that the issue constitutes an important marker of the quality of qualitative research. Nevertheless, research shows that sample size sufficiency reporting is often poor, if not absent, across a range of disciplinary fields.

Methods

A systematic analysis of single-interview-per-participant designs within three health-related journals from the disciplines of psychology, sociology and medicine, over a 15-year period, was conducted to examine whether and how sample sizes were justified and how sample size was characterised and discussed by authors. Data pertinent to sample size were extracted and analysed using qualitative and quantitative analytic techniques.

Results

Our findings demonstrate that provision of sample size justifications in qualitative health research is limited; is not contingent on the number of interviews; and relates to the journal of publication. Defence of sample size was most frequently supported across all three journals with reference to the principle of saturation and to pragmatic considerations. Qualitative sample sizes were predominantly – and often without justification – characterised as insufficient (i.e., ‘small’) and discussed in the context of study limitations. Sample size insufficiency was seen to threaten the validity and generalizability of studies’ results, with the latter being frequently conceived in nomothetic terms.

Conclusions

We recommend, firstly, that qualitative health researchers be more transparent about evaluations of their sample size sufficiency, situating these within broader and more encompassing assessments of data adequacy. Secondly, we invite researchers critically to consider how saturation parameters found in prior methodological studies and sample size community norms might best inform, and apply to, their own project, and encourage that data adequacy be appraised with reference to features that are intrinsic to the study at hand. Finally, those reviewing papers have a vital role in supporting and encouraging transparent study-specific reporting.


Sample adequacy in qualitative inquiry pertains to the appropriateness of the sample composition and size. It is an important consideration in evaluations of the quality and trustworthiness of much qualitative research [ 1 ] and is implicated – particularly for research that is situated within a post-positivist tradition and retains a degree of commitment to realist ontological premises – in appraisals of validity and generalizability [ 2 , 3 , 4 , 5 ].

Samples in qualitative research tend to be small in order to support the depth of case-oriented analysis that is fundamental to this mode of inquiry [ 5 ]. Additionally, qualitative samples are purposive, that is, selected by virtue of their capacity to provide richly-textured information, relevant to the phenomenon under investigation. As a result, purposive sampling [ 6 , 7 ] – as opposed to probability sampling employed in quantitative research – selects ‘information-rich’ cases [ 8 ]. Indeed, recent research demonstrates the greater efficiency of purposive sampling compared to random sampling in qualitative studies [ 9 ], supporting related assertions long put forward by qualitative methodologists.

Sample size in qualitative research has been the subject of enduring discussions [ 4 , 10 , 11 ]. Whilst the quantitative research community has established relatively straightforward statistics-based rules to set sample sizes precisely, the intricacies of qualitative sample size determination and assessment arise from the methodological, theoretical, epistemological, and ideological pluralism that characterises qualitative inquiry (for a discussion focused on the discipline of psychology see [ 12 ]). This mitigates against clear-cut guidelines, invariably applied. Despite these challenges, various conceptual developments have sought to address this issue, with guidance and principles [ 4 , 10 , 11 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 ], and more recently, an evidence-based approach to sample size determination seeks to ground the discussion empirically [ 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 ].

Focusing on single-interview-per-participant qualitative designs, the present study aims to further contribute to the dialogue of sample size in qualitative research by offering empirical evidence around justification practices associated with sample size. We next review the existing conceptual and empirical literature on sample size determination.

Sample size in qualitative research: Conceptual developments and empirical investigations

Qualitative research experts argue that there is no straightforward answer to the question of ‘how many’ and that sample size is contingent on a number of factors relating to epistemological, methodological and practical issues [ 36 ]. Sandelowski [ 4 ] recommends that qualitative sample sizes be large enough to allow the unfolding of a ‘new and richly textured understanding’ of the phenomenon under study, but small enough that the ‘deep, case-oriented analysis’ (p. 183) of qualitative data is not precluded. Morse [ 11 ] posits that the more useable data are collected from each person, the fewer participants are needed. She invites researchers to take into account parameters such as the scope of the study, the nature of the topic (i.e. complexity, accessibility), the quality of data, and the study design. Indeed, the level of structure of questions in qualitative interviewing has been found to influence the richness of data generated [ 37 ] and so requires attention; empirical research shows that open questions, which are asked later on in the interview, tend to produce richer data [ 37 ].

Beyond such guidance, specific numerical recommendations have also been proffered, often based on experts’ experience of qualitative research. For example, Green and Thorogood [ 38 ] maintain that the experience of most qualitative researchers conducting an interview-based study with a fairly specific research question is that little new information is generated after interviewing 20 people or so belonging to one analytically relevant participant ‘category’ (pp. 102–104). Ritchie et al. [ 39 ] suggest that studies employing individual interviews conduct no more than 50 interviews so that researchers are able to manage the complexity of the analytic task. Similarly, Britten [ 40 ] notes that large interview studies will often comprise 50 to 60 people. Experts have also offered numerical guidelines tailored to different theoretical and methodological traditions and specific research approaches, e.g. grounded theory, phenomenology [ 11 , 41 ]. More recently, a quantitative tool was proposed [ 42 ] to support a priori sample size determination based on estimates of the prevalence of themes in the population. Nevertheless, this more formulaic approach raised criticisms relating to assumptions about the conceptual [ 43 ] and ontological status of ‘themes’ [ 44 ] and the linearity ascribed to the processes of sampling, data collection and data analysis [ 45 ].

In terms of principles, Lincoln and Guba [ 17 ] proposed that sample size determination be guided by the criterion of informational redundancy, that is, sampling can be terminated when no new information is elicited by sampling more units. Following the logic of informational comprehensiveness, Malterud et al. [ 18 ] introduced the concept of information power as a pragmatic guiding principle, suggesting that the more information power the sample provides, the smaller the sample size needs to be, and vice versa.

Undoubtedly, the most widely used principle for determining sample size and evaluating its sufficiency is that of saturation. The notion of saturation originates in grounded theory [ 15 ] – a qualitative methodological approach explicitly concerned with empirically-derived theory development – and is inextricably linked to theoretical sampling. Theoretical sampling describes an iterative process of data collection, data analysis and theory development whereby data collection is governed by emerging theory rather than predefined characteristics of the population. Grounded theory saturation (often called theoretical saturation) concerns the theoretical categories – as opposed to data – that are being developed and becomes evident when ‘gathering fresh data no longer sparks new theoretical insights, nor reveals new properties of your core theoretical categories’ [ 46 , p. 113]. Saturation in grounded theory, therefore, does not equate to the more common focus on data repetition and moves beyond a singular focus on sample size as the justification of sampling adequacy [ 46 , 47 ]. Sample size in grounded theory cannot be determined a priori as it is contingent on the evolving theoretical categories.

Saturation – often under the terms of ‘data’ or ‘thematic’ saturation – has diffused into several qualitative communities beyond its origins in grounded theory. Alongside the expansion of its meaning, being variously equated with ‘no new data’, ‘no new themes’, and ‘no new codes’, saturation has emerged as the ‘gold standard’ in qualitative inquiry [ 2 , 26 ]. Nevertheless, and as Morse [ 48 ] asserts, whilst saturation is the most frequently invoked ‘guarantee of qualitative rigor’, ‘it is the one we know least about’ (p. 587). Certainly researchers caution that saturation is less applicable to, or appropriate for, particular types of qualitative research (e.g. conversation analysis, [ 49 ]; phenomenological research, [ 50 ]) whilst others reject the concept altogether [ 19 , 51 ].

Methodological studies in this area aim to provide guidance about saturation and develop a practical application of processes that ‘operationalise’ and evidence saturation. Guest, Bunce, and Johnson [ 26 ] analysed 60 interviews and found that saturation of themes was reached by the twelfth interview. They noted that their sample was relatively homogeneous and their research aims focused, so studies of more heterogeneous samples and with a broader scope would be likely to need a larger size to achieve saturation. Extending the enquiry to multi-site, cross-cultural research, Hagaman and Wutich [ 28 ] showed that sample sizes of 20 to 40 interviews were required to achieve data saturation of meta-themes that cut across research sites. In a theory-driven content analysis, Francis et al. [ 25 ] reached data saturation at the 17th interview for all their pre-determined theoretical constructs. The authors further proposed two main principles upon which specification of saturation be based: (a) researchers should a priori specify an initial analysis sample (e.g. 10 interviews) which will be used for the first round of analysis, and (b) a stopping criterion, that is, a number of further interviews (e.g. 3) whose analysis yields no new themes or ideas. For greater transparency, Francis et al. [ 25 ] recommend that researchers present cumulative frequency graphs supporting their judgment that saturation was achieved. A comparative method for themes saturation (CoMeTS) has also been suggested [ 23 ] whereby the findings of each new interview are compared with those that have already emerged and, if they do not yield any new theme, the ‘saturated terrain’ is assumed to have been established. Because the order in which interviews are analysed can influence saturation thresholds depending on the richness of the data, Constantinou et al. [ 23 ] recommend reordering and re-analysing interviews to confirm saturation.
Hennink, Kaiser and Marconi’s [ 29 ] methodological study sheds further light on the problem of specifying and demonstrating saturation. Their analysis of interview data showed that code saturation (i.e. the point at which no additional issues are identified) was achieved at 9 interviews, but meaning saturation (i.e. the point at which no further dimensions, nuances, or insights of issues are identified) required 16–24 interviews. Although breadth can be achieved relatively soon, especially for high-prevalence and concrete codes, depth requires additional data, especially for codes of a more conceptual nature.
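The initial-sample-plus-stopping-criterion procedure of Francis et al., described above, lends itself to a short operational sketch. Interviews are represented as sets of theme labels; the function name, parameters, and data are illustrative, not taken from the paper:

```python
# Sketch of a stopping-criterion rule: analyse an initial sample, then
# keep interviewing until a fixed run of consecutive interviews (the
# stopping criterion) yields no themes not already seen.
def interviews_to_saturation(interviews, initial_sample=10, stopping_runs=3):
    seen, runs_without_new = set(), 0
    for i, themes in enumerate(interviews, start=1):
        new = set(themes) - seen
        seen |= new
        runs_without_new = 0 if new else runs_without_new + 1
        if i >= initial_sample and runs_without_new >= stopping_runs:
            return i  # saturation declared at interview i
    return None  # criterion never met with this data

# Illustrative data: themes per interview.
data = [{"pain"}, {"cost"}, {"pain"}, {"cost"}, {"pain"}]
print(interviews_to_saturation(data, initial_sample=3, stopping_runs=2))  # 4
```

The same loop also yields the cumulative-frequency record that Francis et al. suggest reporting, since `seen` grows monotonically across interviews.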

Critiquing the concept of saturation, Nelson [ 19 ] proposes five conceptual depth criteria in grounded theory projects to assess the robustness of the developing theory: (a) theoretical concepts should be supported by a wide range of evidence drawn from the data; (b) be demonstrably part of a network of inter-connected concepts; (c) demonstrate subtlety; (d) resonate with existing literature; and (e) can be successfully submitted to tests of external validity.

Other work has sought to examine practices of sample size reporting and sufficiency assessment across a range of disciplinary fields and research domains, from nutrition [ 34 ] and health education [ 32 ], to education and the health sciences [ 22 , 27 ], information systems [ 30 ], organisation and workplace studies [ 33 ], human computer interaction [ 21 ], and accounting studies [ 24 ]. Others investigated PhD qualitative studies [ 31 ] and grounded theory studies [ 35 ]. Incomplete and imprecise sample size reporting is commonly pinpointed by these investigations whilst assessment and justifications of sample size sufficiency are even more sporadic.

Sobal [ 34 ] examined the sample size of qualitative studies published in the Journal of Nutrition Education over a period of 30 years. Studies that employed individual interviews (n = 30) had an average sample size of 45 individuals and none of these explicitly reported whether their sample size sought and/or attained saturation. A minority of articles discussed how sample-related limitations (with the latter most often concerning the type of sample, rather than the size) limited generalizability. A further systematic analysis [ 32 ] of health education research over 20 years demonstrated that interview-based studies averaged 104 participants (range 2 to 720 interviewees). However, 40% did not report the number of participants. An examination of 83 qualitative interview studies in leading information systems journals [ 30 ] indicated little defence of sample sizes on the basis of recommendations by qualitative methodologists, prior relevant work, or the criterion of saturation. Rather, sample size seemed to correlate with factors such as the journal of publication or the region of study (US vs Europe vs Asia). These results led the authors to call for more rigor in determining and reporting sample size in qualitative information systems research and to recommend optimal sample size ranges for grounded theory (i.e. 20–30 interviews) and single case (i.e. 15–30 interviews) projects.

Similarly, fewer than 10% of articles in organisation and workplace studies provided a sample size justification relating to existing recommendations by methodologists, prior relevant work, or saturation [ 33 ], whilst only 17% of focus groups studies in health-related journals provided an explanation of sample size (i.e. number of focus groups), with saturation being the most frequently invoked argument, followed by published sample size recommendations and practical reasons [ 22 ]. The notion of saturation was also invoked by 11 out of the 51 most highly cited studies that Guetterman [ 27 ] reviewed in the fields of education and health sciences, of which six were grounded theory studies, four phenomenological and one a narrative inquiry. Finally, analysing 641 interview-based articles in accounting, Dai et al. [ 24 ] called for more rigor since a significant minority of studies did not report precise sample size.

Despite increasing attention to rigor in qualitative research (e.g. [ 52 ]) and more extensive methodological and analytical disclosures that seek to validate qualitative work [ 24 ], sample size reporting and sufficiency assessment remain inconsistent and partial, if not absent, across a range of research domains.

Objectives of the present study

The present study sought to enrich existing systematic analyses of the customs and practices of sample size reporting and justification by focusing on qualitative research relating to health. Additionally, this study attempted to expand previous empirical investigations by examining how qualitative sample sizes are characterised and discussed in academic narratives. Qualitative health research is an inter-disciplinary field that, owing to its affiliation with the medical sciences, often faces views and positions reflective of a quantitative ethos. Thus qualitative health research constitutes an emblematic case that may help to unfold underlying philosophical and methodological differences across the scientific community that are crystallised in considerations of sample size. The present research, therefore, incorporates a comparative element on the basis of three different disciplines engaging with qualitative health research: medicine, psychology, and sociology. We chose to focus our analysis on single-interview-per-participant designs as this not only represents a popular and widespread methodological choice in qualitative health research, but is also the method where consideration of sample size – defined as the number of interviewees – is particularly salient.

Study design

A structured search for articles reporting cross-sectional, interview-based qualitative studies was carried out and eligible reports were systematically reviewed and analysed employing both quantitative and qualitative analytic techniques.

We selected journals which (a) follow a peer review process, (b) are considered high quality and influential in their field as reflected in journal metrics, and (c) are receptive to, and publish, qualitative research (Additional File 1 presents the journals’ editorial positions in relation to qualitative research and sample considerations where available). Three health-related journals were chosen, each representing a different disciplinary field: the British Medical Journal (BMJ) representing medicine, the British Journal of Health Psychology (BJHP) representing psychology, and the Sociology of Health & Illness (SHI) representing sociology.

Search strategy to identify studies

Employing the search function of each individual journal, we used the terms ‘interview*’ AND ‘qualitative’ and limited the results to articles published between 1 January 2003 and 22 September 2017 (i.e. a 15-year review period).

Eligibility criteria

To be eligible for inclusion in the review, the article had to report a cross-sectional study design. Longitudinal studies were thus excluded whilst studies conducted within a broader research programme (e.g. interview studies nested in a trial, as part of a broader ethnography, as part of a longitudinal research) were included if they reported only single-time qualitative interviews. The method of data collection had to be individual, synchronous qualitative interviews (i.e. group interviews, structured interviews and e-mail interviews over a period of time were excluded), and the data had to be analysed qualitatively (i.e. studies that quantified their qualitative data were excluded). Mixed method studies and articles reporting more than one qualitative method of data collection (e.g. individual interviews and focus groups) were excluded. Figure 1, a PRISMA flow diagram [ 53 ], shows the number of: articles obtained from the searches and screened; papers assessed for eligibility; and articles included in the review (Additional File 2 provides the full list of articles included in the review and their unique identifying code – e.g. BMJ01, BJHP02, SHI03). One review author (KV) assessed the eligibility of all papers identified from the searches. When in doubt, discussions about retaining or excluding articles were held between KV and JB in regular meetings, and decisions were jointly made.

Figure 1: PRISMA flow diagram

Data extraction and analysis

A data extraction form was developed (see Additional File  3 ) recording three areas of information: (a) information about the article (e.g. authors, title, journal, year of publication etc.); (b) information about the aims of the study, the sample size and any justification for this, the participant characteristics, the sampling technique and any sample-related observations or comments made by the authors; and (c) information about the method or technique(s) of data analysis, the number of researchers involved in the analysis, the potential use of software, and any discussion around epistemological considerations. The Abstract, Methods and Discussion (and/or Conclusion) sections of each article were examined by one author (KV) who extracted all the relevant information. This was directly copied from the articles and, when appropriate, comments, notes and initial thoughts were written down.

To examine the kinds of sample size justifications provided by articles, an inductive content analysis [ 54 ] was initially conducted. On the basis of this analysis, the categories that expressed qualitatively different sample size justifications were developed.

We also extracted or coded quantitative data regarding the following aspects:

Journal and year of publication

Number of interviews

Number of participants

Presence of sample size justification(s) (Yes/No)

Presence of a particular sample size justification category (Yes/No), and

Number of sample size justifications provided

Descriptive and inferential statistical analyses were used to explore these data.

A thematic analysis [ 55 ] was then performed on all scientific narratives that discussed or commented on the sample size of the study. These narratives were evident both in papers that justified their sample size and those that did not. To identify these narratives, in addition to the methods sections, the discussion sections of the reviewed articles were also examined and relevant data were extracted and analysed.

In total, 214 articles – 21 in the BMJ, 53 in the BJHP and 140 in the SHI – were eligible for inclusion in the review. Table  1 provides basic information about the sample sizes – measured in number of interviews – of the studies reviewed across the three journals. Figure  2 depicts the number of eligible articles published each year per journal.

Figure 2: Number of eligible articles published each year per journal.

The number of qualitative studies published in the BMJ dropped markedly from 2012 onwards, which appears to coincide with the launch of BMJ Open, to which qualitative studies were possibly redirected.

Pairwise comparisons following a significant Kruskal-Wallis test indicated that the studies published in the BJHP had significantly (p < .001) smaller sample sizes than those published either in the BMJ or the SHI. Sample sizes of BMJ and SHI articles did not differ significantly from each other.
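The omnibus test behind these pairwise comparisons can be sketched as follows. The data below are hypothetical per-study interview counts invented for illustration (the review's actual figures are summarised in Table 1); only the procedure, not the numbers, reflects the analysis described above.

```python
# Illustrative sketch of a Kruskal-Wallis omnibus test across three
# journals, using hypothetical per-study interview counts (NOT the
# review's data, which are summarised in Table 1).
from scipy.stats import kruskal

bjhp = [8, 10, 12, 9, 11]    # hypothetical: smaller samples
bmj = [20, 25, 30, 22, 28]   # hypothetical
shi = [21, 24, 29, 23, 27]   # hypothetical

# Kruskal-Wallis compares the three groups by rank; a significant
# result licenses pairwise follow-up comparisons.
h_stat, p_omnibus = kruskal(bjhp, bmj, shi)
print(f"H = {h_stat:.2f}, p = {p_omnibus:.4f}")
```

Because the test is rank-based it makes no normality assumption, which is why it suits skewed sample-size distributions like these.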

Sample size justifications: Results from the quantitative and qualitative content analysis

Ten (47.6%) of the 21 BMJ studies, 26 (49.1%) of the 53 BJHP papers and 24 (17.1%) of the 140 SHI articles provided some sort of sample size justification. As shown in Table  2 , the majority of articles which justified their sample size provided one justification (70% of articles); fourteen studies (25%) provided two distinct justifications; one study (1.7%) gave three justifications and two studies (3.3%) expressed four distinct justifications.

There was no association between the number of interviews (i.e. sample size) conducted and the provision of a justification (rpb = .054, p = .433). Within journals, Mann-Whitney tests indicated that sample sizes of ‘justifying’ and ‘non-justifying’ articles in the BMJ and SHI did not differ significantly from each other. In the BJHP, ‘justifying’ articles (mean rank = 31.3) had significantly larger sample sizes than ‘non-justifying’ studies (mean rank = 22.7; U = 237.000, p < .05).
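The two checks reported above can be sketched on made-up data. Everything below is a hypothetical illustration (the justification flags and interview counts are invented, not the review's data): a point-biserial correlation between a binary "justification provided" indicator and the number of interviews, followed by a Mann-Whitney U comparison of the two groups.

```python
# Hedged illustration on hypothetical data (NOT the review's):
# 1 = sample size justification provided, 0 = not provided.
from scipy.stats import mannwhitneyu, pointbiserialr

justified = [1, 1, 1, 1, 0, 0, 0, 0]
n_interviews = [30, 32, 28, 35, 10, 12, 11, 9]

# Point-biserial correlation: binary variable vs. continuous variable
r_pb, p_r = pointbiserialr(justified, n_interviews)

# Mann-Whitney U on the same split of sample sizes
just_sizes = [n for j, n in zip(justified, n_interviews) if j == 1]
nonjust_sizes = [n for j, n in zip(justified, n_interviews) if j == 0]
res = mannwhitneyu(just_sizes, nonjust_sizes, alternative="two-sided")

print(f"r_pb = {r_pb:.2f} (p = {p_r:.3f})")
print(f"U = {res.statistic}, p = {res.pvalue:.4f}")
```

In the review itself the correlation was near zero overall, with a significant group difference only within the BJHP; the toy data above are deliberately separated so both statistics come out significant.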

There was a significant association between the journal a paper was published in and the provision of a justification (χ2(2) = 23.83, p < .001). BJHP studies provided a sample size justification significantly more often than would be expected (z = 2.9); SHI studies significantly less often (z = −2.4). If an article was published in the BJHP, the odds of providing a justification were 4.8 times higher than if published in the SHI. Similarly, if published in the BMJ, the odds of a study justifying its sample size were 4.5 times higher than in the SHI.
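As a quick check, the chi-square statistic and the standardized residuals can be recomputed by hand from the counts reported above (justified/not justified per journal: BMJ 10/11, BJHP 26/27, SHI 24/116); the sketch below uses only the standard library.

```python
# Reproduce the 3x2 chi-square test of association between journal
# and provision of a sample size justification, from the counts
# reported in the text: (justified, not justified) per journal.
import math

observed = {"BMJ": (10, 11), "BJHP": (26, 27), "SHI": (24, 116)}

total = sum(a + b for a, b in observed.values())      # 214 articles
col_just = sum(a for a, _ in observed.values())       # 60 justified
col_not = total - col_just                            # 154 not justified

chi2 = 0.0
residuals = {}
for journal, (just, not_just) in observed.items():
    row = just + not_just
    e_just = row * col_just / total                   # expected counts
    e_not = row * col_not / total
    chi2 += (just - e_just) ** 2 / e_just + (not_just - e_not) ** 2 / e_not
    # standardized residual for the 'justified' cell: (O - E) / sqrt(E)
    residuals[journal] = (just - e_just) / math.sqrt(e_just)

print(f"chi2(2) = {chi2:.2f}")   # matches the reported 23.83
print({j: round(z, 1) for j, z in residuals.items()})
```

The residuals for BJHP and SHI come out at 2.9 and −2.4, matching the z-values quoted above; the reported odds ratios can likewise be approximated from the same raw counts.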

The qualitative content analysis of the scientific narratives identified eleven different sample size justifications. These are described below and illustrated with excerpts from relevant articles. By way of a summary, the frequency with which these were deployed across the three journals is indicated in Table  3 .

Saturation

Saturation was the most commonly invoked principle (55.4% of all justifications) deployed by studies across all three journals to justify the sufficiency of their sample size. In the BMJ, two studies claimed that they achieved data saturation (BMJ17; BMJ18) and one article referred descriptively to achieving saturation without explicitly using the term (BMJ13). Interestingly, BMJ13 included data in the analysis beyond the point of saturation in search of ‘unusual/deviant observations’ and with a view to establishing findings consistency.

Thirty three women were approached to take part in the interview study. Twenty seven agreed and 21 (aged 21–64, median 40) were interviewed before data saturation was reached (one tape failure meant that 20 interviews were available for analysis). (BMJ17).

No new topics were identified following analysis of approximately two thirds of the interviews; however, all interviews were coded in order to develop a better understanding of how characteristic the views and reported behaviours were, and also to collect further examples of unusual/deviant observations. (BMJ13).

Two articles reported pre-determining their sample size with a view to achieving data saturation (BMJ08 – see extract in section In line with existing research; BMJ15 – see extract in section Pragmatic considerations) without further specifying if this was achieved. One paper claimed theoretical saturation (BMJ06), conceived as the point at which there were “no further recurring themes emerging from the analysis”, whilst another study argued that although the analytic categories were highly saturated, it was not possible to determine whether theoretical saturation had been achieved (BMJ04). One article (BMJ18) cited a reference to support its position on saturation.

In the BJHP, six articles claimed that they achieved data saturation (BJHP21; BJHP32; BJHP39; BJHP48; BJHP49; BJHP52) and one article stated that, given their sample size and the guidelines for achieving data saturation, it anticipated that saturation would be attained (BJHP50).

Recruitment continued until data saturation was reached, defined as the point at which no new themes emerged. (BJHP48).

It has previously been recommended that qualitative studies require a minimum sample size of at least 12 to reach data saturation (Clarke & Braun, 2013; Fugard & Potts, 2014; Guest, Bunce, & Johnson, 2006) Therefore, a sample of 13 was deemed sufficient for the qualitative analysis and scale of this study. (BJHP50).

Two studies argued that they achieved thematic saturation (BJHP28 – see extract in section Sample size guidelines; BJHP31) and one article (BJHP30), explicitly concerned with theory development and deploying theoretical sampling, claimed both theoretical and data saturation.

The final sample size was determined by thematic saturation, the point at which new data appears to no longer contribute to the findings due to repetition of themes and comments by participants (Morse, 1995). At this point, data generation was terminated. (BJHP31).

Five studies argued that they achieved (BJHP05; BJHP33; BJHP40; BJHP13 – see extract in section Pragmatic considerations) or anticipated (BJHP46) saturation without any further specification of the term. BJHP17 referred descriptively to a state of achieved saturation without specifically using the term. Saturation of coding, but not saturation of themes, was claimed to have been reached by one article (BJHP18). Two articles explicitly stated that they did not achieve saturation, instead citing a level of theme completeness (BJHP27) or the replication of themes (BJHP53) as arguments for the sufficiency of their sample size.

Furthermore, data collection ceased on pragmatic grounds rather than at the point when saturation point was reached. Despite this, although nuances within sub-themes were still emerging towards the end of data analysis, the themes themselves were being replicated indicating a level of completeness. (BJHP27).

Finally, one article criticised and explicitly renounced the notion of data saturation claiming that, on the contrary, the criterion of theoretical sufficiency determined its sample size (BJHP16).

According to the original Grounded Theory texts, data collection should continue until there are no new discoveries (i.e., ‘data saturation’; Glaser & Strauss, 1967). However, recent revisions of this process have discussed how it is rare that data collection is an exhaustive process and researchers should rely on how well their data are able to create a sufficient theoretical account or ‘theoretical sufficiency’ (Dey, 1999). For this study, it was decided that theoretical sufficiency would guide recruitment, rather than looking for data saturation. (BJHP16).

Ten out of the 20 BJHP articles that employed the argument of saturation used one or more citations relating to this principle.

In the SHI, one article (SHI01) claimed that it achieved category saturation based on authors’ judgment.

This number was not fixed in advance, but was guided by the sampling strategy and the judgement, based on the analysis of the data, of the point at which ‘category saturation’ was achieved. (SHI01).

Three articles described a state of achieved saturation without using the term or specifying what sort of saturation they had achieved (i.e. data, theoretical, thematic saturation) (SHI04; SHI13; SHI30) whilst another four articles explicitly stated that they achieved saturation (SHI100; SHI125; SHI136; SHI137). Two papers stated that they achieved data saturation (SHI73 – see extract in section Sample size guidelines; SHI113), two claimed theoretical saturation (SHI78; SHI115) and two referred to achieving thematic saturation (SHI87; SHI139) or to saturated themes (SHI29; SHI50).

Recruitment and analysis ceased once theoretical saturation was reached in the categories described below (Lincoln and Guba 1985). (SHI115).

The respondents’ quotes drawn on below were chosen as representative, and illustrate saturated themes. (SHI50).

One article stated that thematic saturation was anticipated with its sample size (SHI94). Briefly referring to the difficulty in pinpointing achievement of theoretical saturation, SHI32 (see extract in section Richness and volume of data) defended the sufficiency of its sample size on the basis of “the high degree of consensus [that] had begun to emerge among those interviewed”, suggesting that information from interviews was being replicated. Finally, SHI112 (see extract in section Further sampling to check findings consistency) argued that it achieved saturation of discursive patterns. Seven of the 19 SHI articles cited references to support their position on saturation (see Additional File 4 for the full list of citations used by articles to support their position on saturation across the three journals).

Overall, it is clear that the concept of saturation encompassed a wide range of variants expressed in terms such as saturation, data saturation, thematic saturation, theoretical saturation, category saturation, saturation of coding, saturation of discursive patterns, and theme completeness. It is noteworthy, however, that although these various claims were sometimes supported with reference to the literature, they were not evidenced in relation to the study at hand.

Pragmatic considerations

The determination of sample size on the basis of pragmatic considerations was the second most frequently invoked argument (9.6% of all justifications) appearing in all three journals. In the BMJ, one article (BMJ15) appealed to pragmatic reasons, relating to time constraints and the difficulty of accessing certain study populations, to justify the determination of its sample size.

On the basis of the researchers’ previous experience and the literature, [30, 31] we estimated that recruitment of 15–20 patients at each site would achieve data saturation when data from each site were analysed separately. We set a target of seven to 10 caregivers per site because of time constraints and the anticipated difficulty of accessing caregivers at some home based care services. This gave a target sample of 75–100 patients and 35–50 caregivers overall. (BMJ15).

In the BJHP, four articles mentioned pragmatic considerations relating to time or financial constraints (BJHP27 – see extract in section Saturation; BJHP53), the participant response rate (BJHP13), and the fixed (and thus limited) size of the participant pool from which interviewees were sampled (BJHP18).

We had aimed to continue interviewing until we had reached saturation, a point whereby further data collection would yield no further themes. In practice, the number of individuals volunteering to participate dictated when recruitment into the study ceased (15 young people, 15 parents). Nonetheless, by the last few interviews, significant repetition of concepts was occurring, suggesting ample sampling. (BJHP13).

Finally, three SHI articles explained their sample size with reference to practical aspects: time constraints and project manageability (SHI56), limited availability of respondents and project resources (SHI131), and time constraints (SHI113).

The size of the sample was largely determined by the availability of respondents and resources to complete the study. Its composition reflected, as far as practicable, our interest in how contextual factors (for example, gender relations and ethnicity) mediated the illness experience. (SHI131).

Qualities of the analysis

This sample size justification (8.4% of all justifications) was mainly employed by BJHP articles and referred to an intensive, idiographic and/or latently focused analysis, i.e. an analysis that moved beyond description. More specifically, six articles defended their sample size on the basis of an intensive analysis of transcripts and/or the idiographic focus of the study/analysis. Four of these papers (BJHP02; BJHP19; BJHP24; BJHP47) adopted an Interpretative Phenomenological Analysis (IPA) approach.

The current study employed a sample of 10 in keeping with the aim of exploring each participant’s account (Smith et al., 1999). (BJHP19).

BJHP47 explicitly renounced the notion of saturation within an IPA approach. The other two BJHP articles conducted thematic analysis (BJHP34; BJHP38). The level of analysis – i.e. latent as opposed to a more superficial descriptive analysis – was also invoked as a justification by BJHP38, alongside the argument of an intensive analysis of individual transcripts.

The resulting sample size was at the lower end of the range of sample sizes employed in thematic analysis (Braun & Clarke, 2013). This was in order to enable significant reflection, dialogue, and time on each transcript and was in line with the more latent level of analysis employed, to identify underlying ideas, rather than a more superficial descriptive analysis (Braun & Clarke, 2006). (BJHP38).

Finally, one BMJ paper (BMJ21) defended its sample size with reference to the complexity of the analytic task.

We stopped recruitment when we reached 30–35 interviews, owing to the depth and duration of interviews, richness of data, and complexity of the analytical task. (BMJ21).

Meet sampling requirements

Meeting sampling requirements (7.2% of all justifications) was another argument employed by two BMJ and four SHI articles to explain their sample size. Achieving maximum variation sampling in terms of specific interviewee characteristics determined and explained the sample size of two BMJ studies (BMJ02; BMJ16 – see extract in section Meet research design requirements).

Recruitment continued until sampling frame requirements were met for diversity in age, sex, ethnicity, frequency of attendance, and health status. (BMJ02).

Regarding the SHI articles, two papers explained their numbers on the basis of their sampling strategy (SHI01 – see extract in section Saturation; SHI23) whilst sampling requirements that would help attain sample heterogeneity in terms of a particular characteristic of interest were cited by one paper (SHI127).

The combination of matching the recruitment sites for the quantitative research and the additional purposive criteria led to 104 phase 2 interviews (Internet (OLC): 21; Internet (FTF): 20; Gyms (FTF): 23; HIV testing (FTF): 20; HIV treatment (FTF): 20). (SHI23).

Of the fifty interviews conducted, thirty were translated from Spanish into English. These thirty, from which we draw our findings, were chosen for translation based on heterogeneity in depressive symptomology and educational attainment. (SHI127).

Finally, the pre-determination of sample size on the basis of sampling requirements was stated by one article though this was not used to justify the number of interviews (SHI10).

Sample size guidelines

Five BJHP articles (BJHP28; BJHP38 – see extract in section Qualities of the analysis; BJHP46; BJHP47; BJHP50 – see extract in section Saturation) and one SHI paper (SHI73) relied on citing existing sample size guidelines or norms within research traditions to determine and subsequently defend their sample size (7.2% of all justifications).

Sample size guidelines suggested a range between 20 and 30 interviews to be adequate (Creswell, 1998). Interviewer and note taker agreed that thematic saturation, the point at which no new concepts emerge from subsequent interviews (Patton, 2002), was achieved following completion of 20 interviews. (BJHP28).

Interviewing continued until we deemed data saturation to have been reached (the point at which no new themes were emerging). Researchers have proposed 30 as an approximate or working number of interviews at which one could expect to be reaching theoretical saturation when using a semi-structured interview approach (Morse 2000), although this can vary depending on the heterogeneity of respondents interviewed and complexity of the issues explored. (SHI73).

In line with existing research

Sample sizes of published literature in the area of the subject matter under investigation (3.5% of all justifications) were used by 2 BMJ articles as guidance and a precedent for determining and defending their own sample size (BMJ08; BMJ15 – see extract in section Pragmatic considerations).

We drew participants from a list of prisoners who were scheduled for release each week, sampling them until we reached the target of 35 cases, with a view to achieving data saturation within the scope of the study and sufficient follow-up interviews and in line with recent studies [8–10]. (BMJ08).

Similarly, BJHP38 (see extract in section Qualities of the analysis) claimed that its sample size was within the range of sample sizes of published studies that use its analytic approach.

Richness and volume of data

BMJ21 (see extract in section Qualities of the analysis) and SHI32 referred to the richness, detailed nature, and volume of data collected (2.3% of all justifications) to justify the sufficiency of their sample size.

Although there were more potential interviewees from those contacted by postcode selection, it was decided to stop recruitment after the 10th interview and focus on analysis of this sample. The material collected was considerable and, given the focused nature of the study, extremely detailed. Moreover, a high degree of consensus had begun to emerge among those interviewed, and while it is always difficult to judge at what point ‘theoretical saturation’ has been reached, or how many interviews would be required to uncover exception(s), it was felt the number was sufficient to satisfy the aims of this small in-depth investigation (Strauss and Corbin 1990). (SHI32).

Meet research design requirements

Determining the sample size so that it was in line with, and served the requirements of, the research design the study adopted (2.3% of all justifications) was another justification used by 2 BMJ papers (BMJ16; BMJ08 – see extract in section In line with existing research).

We aimed for diverse, maximum variation samples [20] totalling 80 respondents from different social backgrounds and ethnic groups and those bereaved due to different types of suicide and traumatic death. We could have interviewed a smaller sample at different points in time (a qualitative longitudinal study) but chose instead to seek a broad range of experiences by interviewing those bereaved many years ago and others bereaved more recently; those bereaved in different circumstances and with different relations to the deceased; and people who lived in different parts of the UK; with different support systems and coroners’ procedures (see Tables 1 and 2 for more details). (BMJ16).

Researchers’ previous experience

The researchers’ previous experience (possibly referring to experience with qualitative research) was invoked by BMJ15 (see extract in section Pragmatic considerations) as a justification for the determination of sample size.

Nature of study

One BJHP paper argued that the sample size was appropriate for the exploratory nature of the study (BJHP38).

A sample of eight participants was deemed appropriate because of the exploratory nature of this research and the focus on identifying underlying ideas about the topic. (BJHP38).

Further sampling to check findings consistency

Finally, SHI112 argued that once it had achieved saturation of discursive patterns, further sampling was decided and conducted to check for consistency of the findings.

Within each of the age-stratified groups, interviews were randomly sampled until saturation of discursive patterns was achieved. This resulted in a sample of 67 interviews. Once this sample had been analysed, one further interview from each age-stratified group was randomly chosen to check for consistency of the findings. Using this approach it was possible to more carefully explore children’s discourse about the ‘I’, agency, relationality and power in the thematic areas, revealing the subtle discursive variations described in this article. (SHI112).

Thematic analysis of passages discussing sample size

This analysis resulted in two overarching thematic areas: the first concerned the variation in the characterisation of sample size sufficiency, and the second related to the perceived threats deriving from sample size insufficiency.

Characterisations of sample size sufficiency

The analysis showed that there were three main characterisations of the sample size in the articles that provided relevant comments and discussion: (a) the vast majority of these qualitative studies (n = 42) considered their sample size as ‘small’ and this was seen and discussed as a limitation; only two articles viewed their small sample size as desirable and appropriate; (b) a minority of articles (n = 4) proclaimed that their achieved sample size was ‘sufficient’; and (c) finally, a small group of studies (n = 5) characterised their sample size as ‘large’. Whilst achieving a ‘large’ sample size was sometimes viewed positively because it led to richer results, there were also occasions when a large sample size was problematic rather than desirable.

‘Small’ but why and for whom?

A number of articles which characterised their sample size as ‘small’ did so against an implicit or explicit quantitative framework of reference. Interestingly, three studies that claimed to have achieved data saturation or ‘theoretical sufficiency’ with their sample size nonetheless noted their ‘small’ sample size as a limitation in their discussion, raising the question of why, or for whom, the sample size was considered small given that the qualitative criterion of saturation had been satisfied.

The current study has a number of limitations. The sample size was small (n = 11) and, however, large enough for no new themes to emerge. (BJHP39).

The study has two principal limitations. The first of these relates to the small number of respondents who took part in the study. (SHI73).

Other articles appeared to accept and acknowledge that their sample was flawed because of its small size (as well as other compositional ‘deficits’ e.g. non-representativeness, biases, self-selection) or anticipated that they might be criticized for their small sample size. It seemed that the imagined audience – perhaps reviewer or reader – was one inclined to hold the tenets of quantitative research, and certainly one to whom it was important to indicate the recognition that small samples were likely to be problematic. That one’s sample might be thought small was often construed as a limitation couched in a discourse of regret or apology.

Very occasionally, the articulation of the small size as a limitation was explicitly aligned against an espoused positivist framework and quantitative research.

This study has some limitations. Firstly, the 100 incidents sample represents a small number of the total number of serious incidents that occurs every year. 26 We sent out a nationwide invitation and do not know why more people did not volunteer for the study. Our lack of epidemiological knowledge about healthcare incidents, however, means that determining an appropriate sample size continues to be difficult. (BMJ20).

Indicative of an apparent oscillation of qualitative researchers between the different requirements and protocols demarcating the quantitative and qualitative worlds, there were a few instances of articles which briefly recognised their ‘small’ sample size as a limitation, but then defended their study on more qualitative grounds, such as their ability and success at capturing the complexity of experience and delving into the idiographic, and at generating particularly rich data.

This research, while limited in size, has sought to capture some of the complexity attached to men’s attitudes and experiences concerning incomes and material circumstances. (SHI35).

Our numbers are small because negotiating access to social networks was slow and labour intensive, but our methods generated exceptionally rich data. (BMJ21).

This study could be criticised for using a small and unrepresentative sample. Given that older adults have been ignored in the research concerning suntanning, fair-skinned older adults are the most likely to experience skin cancer, and women privilege appearance over health when it comes to sunbathing practices, our study offers depth and richness of data in a demographic group much in need of research attention. (SHI57).

‘Good enough’ sample sizes

Only four articles expressed some degree of confidence that their achieved sample size was sufficient. For example, SHI139, in line with the justification of thematic saturation that it offered, expressed trust in its sample size sufficiency despite the poor response rate. Similarly, BJHP04, which did not provide a sample size justification, argued that it targeted a larger sample size in order to eventually recruit a sufficient number of interviewees, due to an anticipated low response rate.

Twenty-three people with type I diabetes from the target population of 133 (i.e. 17.3%) consented to participate but four did not then respond to further contacts (total N = 19). The relatively low response rate was anticipated, due to the busy life-styles of young people in the age range, the geographical constraints, and the time required to participate in a semi-structured interview, so a larger target sample allowed a sufficient number of participants to be recruited. (BJHP04).

Two other articles (BJHP35; SHI32) linked the claimed sufficiency to the scope (i.e. ‘small, in-depth investigation’), aims and nature (i.e. ‘exploratory’) of their studies, thus anchoring their numbers to the particular context of their research. Nevertheless, claims of sample size sufficiency were sometimes undermined when they were juxtaposed with an acknowledgement that a larger sample size would be more scientifically productive.

Although our sample size was sufficient for this exploratory study, a more diverse sample including participants with lower socioeconomic status and more ethnic variation would be informative. A larger sample could also ensure inclusion of a more representative range of apps operating on a wider range of platforms. (BJHP35).

‘Large’ sample sizes - Promise or peril?

Three articles (BMJ13; BJHP05; BJHP48), which all provided the justification of saturation, characterised their sample size as ‘large’ and narrated this oversufficiency in positive terms as it allowed richer data and findings and enhanced the potential for generalisation. The type of generalisation aspired to (BJHP48) was not, however, further specified.

This study used rich data provided by a relatively large sample of expert informants on an important but under-researched topic. (BMJ13).

Qualitative research provides a unique opportunity to understand a clinical problem from the patient’s perspective. This study had a large diverse sample, recruited through a range of locations and used in-depth interviews which enhance the richness and generalizability of the results. (BJHP48).

And whilst a ‘large’ sample size was endorsed and valued by some qualitative researchers, within the psychological tradition of IPA, a ‘large’ sample size was counter-normative and therefore needed to be justified. Four BJHP studies, all adopting IPA, expressed the appropriateness or desirability of ‘small’ sample sizes (BJHP41; BJHP45) or hastened to explain why they included a larger than typical sample size (BJHP32; BJHP47). For example, BJHP32 below provides a rationale for how an IPA study can accommodate a large sample size and how this was indeed suitable for the purposes of the particular research. To strengthen the explanation for choosing a non-normative sample size, previous IPA research citing a similar sample size approach is used as a precedent.

Small scale IPA studies allow in-depth analysis which would not be possible with larger samples (Smith et al., 2009). (BJHP41).

Although IPA generally involves intense scrutiny of a small number of transcripts, it was decided to recruit a larger diverse sample as this is the first qualitative study of this population in the United Kingdom (as far as we know) and we wanted to gain an overview. Indeed, Smith, Flowers, and Larkin (2009) agree that IPA is suitable for larger groups. However, the emphasis changes from an in-depth individualistic analysis to one in which common themes from shared experiences of a group of people can be elicited and used to understand the network of relationships between themes that emerge from the interviews. This large-scale format of IPA has been used by other researchers in the field of false-positive research. Baillie, Smith, Hewison, and Mason (2000) conducted an IPA study, with 24 participants, of ultrasound screening for chromosomal abnormality; they found that this larger number of participants enabled them to produce a more refined and cohesive account. (BJHP32).

The IPA articles found in the BJHP were the only instances where a ‘small’ sample size was advocated and a ‘large’ sample size problematized and defended. These IPA studies illustrate that the characterisation of sample size sufficiency can be a function of researchers’ theoretical and epistemological commitments rather than the result of an ‘objective’ sample size assessment.

Threats from sample size insufficiency

As shown above, the majority of articles that commented on their sample size simultaneously characterised it as small and problematic. On those occasions when authors did not simply cite their ‘small’ sample size as a study limitation but went on to provide an account of how and why a small sample size was problematic, two important scientific qualities of the research seemed to be threatened: the generalizability and validity of results.

Generalizability

Those who characterised their sample as ‘small’ connected this to the limited potential for generalisation of the results. Other features related to the sample – often some kind of compositional particularity – were also linked to limited potential for generalisation. Although the articles did not always explicitly articulate what form of generalisation they referred to (see BJHP09), generalisation was mostly conceived in nomothetic terms, that is, as the potential to draw inferences from the sample to the broader study population (‘representational generalisation’ – see BJHP31) and, less often, to other populations or cultures.

It must be noted that samples are small and whilst in both groups the majority of those women eligible participated, generalizability cannot be assumed. (BJHP09).

The study’s limitations should be acknowledged: Data are presented from interviews with a relatively small group of participants, and thus, the views are not necessarily generalizable to all patients and clinicians. In particular, patients were only recruited from secondary care services where COFP diagnoses are typically confirmed. The sample therefore is unlikely to represent the full spectrum of patients, particularly those who are not referred to, or who have been discharged from dental services. (BJHP31).

Without explicitly using the term generalisation, two SHI articles noted how their ‘small’ sample size imposed limits on ‘the extent that we can extrapolate from these participants’ accounts’ (SHI114) or to the possibility ‘to draw far-reaching conclusions from the results’ (SHI124).

Interestingly, only a minority of articles alluded to, or invoked, a type of generalisation that is aligned with qualitative research, that is, idiographic generalisation (i.e. generalisation that can be made from and about cases [ 5 ]). These articles, all published in the discipline of sociology, defended their findings in terms of the possibility of drawing logical and conceptual inferences to other contexts and of generating understanding that has the potential to advance knowledge, despite their ‘small’ size. One article (SHI139) clearly contrasted nomothetic (statistical) generalisation to idiographic generalisation, arguing that the lack of statistical generalizability does not nullify the ability of qualitative research to still be relevant beyond the sample studied.

Further, these data do not need to be statistically generalisable for us to draw inferences that may advance medicalisation analyses (Charmaz 2014). These data may be seen as an opportunity to generate further hypotheses and are a unique application of the medicalisation framework. (SHI139)

Although a small-scale qualitative study related to school counselling, this analysis can be usefully regarded as a case study of the successful utilisation of mental health-related resources by adolescents. As many of the issues explored are of relevance to mental health stigma more generally, it may also provide insights into adult engagement in services. It shows how a sociological analysis, which uses positioning theory to examine how people negotiate, partially accept and simultaneously resist stigmatisation in relation to mental health concerns, can contribute to an elucidation of the social processes and narrative constructions which may maintain as well as bridge the mental health service gap. (SHI103)

Only one article (SHI30) used the term transferability to argue for the potential of wider relevance of the results which was thought to be more the product of the composition of the sample (i.e. diverse sample), rather than the sample size.

Internal validity

The second major concern arising from a ‘small’ sample size pertained to the internal validity of findings (the term is used here to denote the ‘truth’ or credibility of research findings). Authors expressed uncertainty about the degree of confidence they could place in particular aspects or patterns of their results, primarily those that concerned some form of differentiation on the basis of relevant participant characteristics.

The information source preferred seemed to vary according to parents’ education; however, the sample size is too small to draw conclusions about such patterns. (SHI80)

Although our numbers were too small to demonstrate gender differences with any certainty, it does seem that the biomedical and erotic scripts may be more common in the accounts of men and the relational script more common in the accounts of women. (SHI81)

In other instances, articles expressed uncertainty about whether their results accounted for the full spectrum and variation of the phenomenon under investigation. In other words, a ‘small’ sample size (alongside compositional ‘deficits’ such as a sample that was not statistically representative) was seen to threaten the ‘content validity’ of the results, which in turn led to constructions of the study conclusions as tentative.

Data collection ceased on pragmatic grounds rather than when no new information appeared to be obtained (i.e., saturation point). As such, care should be taken not to overstate the findings. Whilst the themes from the initial interviews seemed to be replicated in the later interviews, further interviews may have identified additional themes or provided more nuanced explanations. (BJHP53)

…it should be acknowledged that this study was based on a small sample of self-selected couples in enduring marriages who were not broadly representative of the population. Thus, participants may not be representative of couples that experience postnatal PTSD. It is therefore unlikely that all the key themes have been identified and explored. For example, couples who were excluded from the study because the male partner declined to participate may have been experiencing greater interpersonal difficulties. (BJHP03)

In other instances, articles attempted to preserve a degree of credibility of their results, despite the recognition that the sample size was ‘small’. Clarity and sharpness of emerging themes and alignment with previous relevant work were the arguments employed to warrant the validity of the results.

This study focused on British Chinese carers of patients with affective disorders, using a qualitative methodology to synthesise the sociocultural representations of illness within this community. Despite the small sample size, clear themes emerged from the narratives that were sufficient for this exploratory investigation. (SHI98).

The present study sought to examine how qualitative sample sizes in health-related research are characterised and justified. In line with previous studies [ 22 , 30 , 33 , 34 ] the findings demonstrate that reporting of sample size sufficiency is limited; just over 50% of articles in the BMJ and BJHP and 82% in the SHI did not provide any sample size justification. Providing a sample size justification was not related to the number of interviews conducted, but it was associated with the journal that the article was published in, indicating the influence of disciplinary or publishing norms, also reported in prior research [ 30 ]. This lack of transparency about sample size sufficiency is problematic given that most qualitative researchers would agree that it is an important marker of quality [ 56 , 57 ]. Moreover, and with the rise of qualitative research in social sciences, efforts to synthesise existing evidence and assess its quality are obstructed by poor reporting [ 58 , 59 ].

When authors justified their sample size, our findings indicate that sufficiency was mostly appraised with reference to features intrinsic to the study, in agreement with general advice on sample size determination [4, 11, 36]. The principle of saturation was the most commonly invoked argument [22], accounting for 55% of all justifications. A wide range of variants of saturation was evident, corroborating the proliferation of meanings of the term [49] and reflecting different underlying conceptualisations or models of saturation [20]. Nevertheless, claims of saturation were never substantiated in relation to procedures conducted in the study itself, endorsing similar observations in the literature [25, 30, 47]. Claims of saturation were sometimes supported with citations of other literature, suggesting a removal of the concept away from the characteristics of the study at hand. Pragmatic considerations, such as resource constraints or participant response rate and availability, were the second most frequently used argument, accounting for approximately 10% of justifications. Another 23% of justifications likewise invoked intrinsic-to-the-study characteristics (i.e. qualities of the analysis, meeting sampling or research design requirements, richness and volume of the data obtained, nature of the study, further sampling to check findings consistency).

Only 12% of mentions of sample size justification pertained to arguments external to the study at hand, in the form of existing sample size guidelines and prior research that sets precedents. Whilst community norms and prior research can establish useful rules of thumb for estimating sample sizes [60] – and reveal what sizes are more likely to be acceptable within research communities – researchers should avoid adopting these norms uncritically, especially when such guidelines [e.g. 30, 35] might be based on research that does not provide adequate evidence of sample size sufficiency. Similarly, whilst methodological research that seeks to demonstrate the achievement of saturation is invaluable, since it explicates the parameters upon which saturation is contingent and indicates when a research project is likely to require a smaller or a larger sample [e.g. 29], specific numbers at which saturation was achieved within these projects cannot be routinely extrapolated to other projects. We concur with existing views [11, 36] that the consideration of the characteristics of the study at hand – such as the epistemological and theoretical approach, the nature of the phenomenon under investigation, the aims and scope of the study, the quality and richness of data, or the researcher’s experience and skill in conducting qualitative research – should be the primary guide in determining sample size and assessing its sufficiency.

Moreover, although numbers in qualitative research are not unimportant [ 61 ], sample size should not be considered alone but be embedded in the more encompassing examination of data adequacy [ 56 , 57 ]. Erickson’s [ 62 ] dimensions of ‘evidentiary adequacy’ are useful here. He explains the concept in terms of adequate amounts of evidence, adequate variety in kinds of evidence, adequate interpretive status of evidence, adequate disconfirming evidence, and adequate discrepant case analysis. All dimensions might not be relevant across all qualitative research designs, but this illustrates the thickness of the concept of data adequacy, taking it beyond sample size.

The present research also demonstrated that sample sizes were commonly seen as ‘small’ and insufficient and discussed as a limitation. These characterisations were often unjustified (and in two cases incongruent with the articles’ own claims of saturation), implying that sample size in qualitative health research is often adversely judged (or expected to be judged) against an implicit, yet omnipresent, quasi-quantitative standpoint. Indeed, there were a few instances in our data where authors appeared, possibly in response to reviewers, to resist some sort of quantification of their results. This implicit reference point became more apparent when authors discussed the threats deriving from an insufficient sample size. Whilst the concerns about internal validity might be legitimate to the extent that qualitative research projects, which are broadly related to realism, are set to examine phenomena in sufficient breadth and depth, the concerns around generalizability revealed a conceptualisation that is not compatible with purposive sampling. The limited potential for generalisation, as a result of a small sample size, was often discussed in nomothetic, statistical terms. Only occasionally was analytic or idiographic generalisation invoked to warrant the value of the study’s findings [5, 17].

Strengths and limitations of the present study

We note, first, the limited number of health-related journals reviewed, meaning that only a ‘snapshot’ of qualitative health research has been captured. Examining additional disciplines (e.g. nursing sciences) as well as inter-disciplinary journals would add to the findings of this analysis. Nevertheless, our study is the first to provide comparative insights on the basis of disciplines that are differently attached to the legacy of positivism, and it analysed literature published over a lengthy period of time (15 years). Guetterman [27] also examined health-related literature, but that analysis was restricted to the 26 most highly cited articles published over a period of five years, whilst Carlsen and Glenton’s [22] study concentrated on focus group health research. Moreover, although it was our intention to examine sample size justification in relation to the epistemological and theoretical positions of articles, this proved challenging, largely due to the absence of relevant information or the difficulty of clearly discerning articles’ positions [63] and classifying them under specific approaches (e.g. studies often combined elements from different theoretical and epistemological traditions). We believe that such an analysis would yield useful insights, as it links the methodological issue of sample size to the broader philosophical stance of the research. Despite these limitations, the analysis of the characterisation of sample size, and of the threats seen to accrue from insufficient sample size, enriches our understanding of sample size (in)sufficiency argumentation by linking it to other features of the research. As the peer-review process becomes increasingly public, future research could usefully examine how reporting around sample size sufficiency and data adequacy might be influenced by the interactions between authors and reviewers.

The past decade has seen a growing appetite in qualitative research for an evidence-based approach to sample size determination and to evaluations of the sufficiency of sample size. Despite the conceptual and methodological developments in the area, the findings of the present study confirm previous studies in concluding that appraisals of sample size sufficiency are either absent or poorly substantiated. To ensure and maintain high quality research that will encourage greater appreciation of qualitative work in health-related sciences [ 64 ], we argue that qualitative researchers should be more transparent and thorough in their evaluation of sample size as part of their appraisal of data adequacy. We would encourage the practice of appraising sample size sufficiency with close reference to the study at hand and would thus caution against responding to the growing methodological research in this area with a decontextualised application of sample size numerical guidelines, norms and principles. Although researchers might find sample size community norms serve as useful rules of thumb, we recommend methodological knowledge is used to critically consider how saturation and other parameters that affect sample size sufficiency pertain to the specifics of the particular project. Those reviewing papers have a vital role in encouraging transparent study-specific reporting. The review process should support authors to exercise nuanced judgments in decisions about sample size determination in the context of the range of factors that influence sample size sufficiency and the specifics of a particular study. In light of the growing methodological evidence in the area, transparent presentation of such evidence-based judgement is crucial and in time should surely obviate the seemingly routine practice of citing the ‘small’ size of qualitative samples among the study limitations.

A non-parametric test of difference for independent samples was performed since the variable number of interviews violated assumptions of normality according to the standardized scores of skewness and kurtosis (BMJ: z skewness = 3.23, z kurtosis = 1.52; BJHP: z skewness = 4.73, z kurtosis = 4.85; SHI: z skewness = 12.04, z kurtosis = 21.72) and the Shapiro-Wilk test of normality ( p  < .001).
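For reference, the standardized skewness and kurtosis scores used in this normality check are simply the sample statistics divided by their large-sample standard errors. A minimal pure-Python sketch of that computation; the example data are invented interview counts, not the article's dataset:

```python
import math

def z_skew_kurt(x):
    """Standardized (z) scores for sample skewness and excess kurtosis:
    the statistic divided by its large-sample standard error."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n   # second central moment
    m3 = sum((v - mean) ** 3 for v in x) / n   # third central moment
    m4 = sum((v - mean) ** 4 for v in x) / n   # fourth central moment
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2 - 3.0                  # excess kurtosis
    se_skew = math.sqrt(6 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))
    se_kurt = 2 * se_skew * math.sqrt((n ** 2 - 1) / ((n - 3) * (n + 5)))
    return skew / se_skew, kurt / se_kurt

# Hypothetical right-skewed counts of interviews per study:
counts = [3, 4, 5, 5, 6, 6, 7, 8, 10, 14, 20, 35]
z_s, z_k = z_skew_kurt(counts)
print(z_s > 1.96)  # → True: skewness z-score exceeds the conventional 1.96 cut-off
```

A |z| above roughly 1.96 is the usual flag for departure from normality, which is why the article fell back on a non-parametric test.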

Abbreviations

BJHP: British Journal of Health Psychology

BMJ: British Medical Journal

IPA: Interpretative Phenomenological Analysis

SHI: Sociology of Health & Illness

Spencer L, Ritchie J, Lewis J, Dillon L. Quality in qualitative evaluation: a framework for assessing research evidence. National Centre for Social Research 2003 https://www.heacademy.ac.uk/system/files/166_policy_hub_a_quality_framework.pdf Accessed 11 May 2018.

Fusch PI, Ness LR. Are we there yet? Data saturation in qualitative research. Qual Rep. 2015;20(9):1408–16.


Robinson OC. Sampling in interview-based qualitative research: a theoretical and practical guide. Qual Res Psychol. 2014;11(1):25–41.


Sandelowski M. Sample size in qualitative research. Res Nurs Health. 1995;18(2):179–83.


Sandelowski M. One is the liveliest number: the case orientation of qualitative research. Res Nurs Health. 1996;19(6):525–9.

Luborsky MR, Rubinstein RL. Sampling in qualitative research: rationale, issues, and methods. Res Aging. 1995;17(1):89–113.

Marshall MN. Sampling for qualitative research. Fam Pract. 1996;13(6):522–6.

Patton MQ. Qualitative evaluation and research methods. 2nd ed. Newbury Park, CA: Sage; 1990.

van Rijnsoever FJ. (I Can’t get no) saturation: a simulation and guidelines for sample sizes in qualitative research. PLoS One. 2017;12(7):e0181689.

Morse JM. The significance of saturation. Qual Health Res. 1995;5(2):147–9.

Morse JM. Determining sample size. Qual Health Res. 2000;10(1):3–5.

Gergen KJ, Josselson R, Freeman M. The promises of qualitative inquiry. Am Psychol. 2015;70(1):1–9.

Borsci S, Macredie RD, Barnett J, Martin J, Kuljis J, Young T. Reviewing and extending the five-user assumption: a grounded procedure for interaction evaluation. ACM Trans Comput Hum Interact. 2013;20(5):29.

Borsci S, Macredie RD, Martin JL, Young T. How many testers are needed to assure the usability of medical devices? Expert Rev Med Devices. 2014;11(5):513–25.

Glaser BG, Strauss AL. The discovery of grounded theory: strategies for qualitative research. Chicago, IL: Aldine; 1967.

Kerr C, Nixon A, Wild D. Assessing and demonstrating data saturation in qualitative inquiry supporting patient-reported outcomes research. Expert Rev Pharmacoecon Outcomes Res. 2010;10(3):269–81.

Lincoln YS, Guba EG. Naturalistic inquiry. London: Sage; 1985.


Malterud K, Siersma VD, Guassora AD. Sample size in qualitative interview studies: guided by information power. Qual Health Res. 2015;26:1753–60.

Nelson J. Using conceptual depth criteria: addressing the challenge of reaching saturation in qualitative research. Qual Res. 2017;17(5):554–70.

Saunders B, Sim J, Kingstone T, Baker S, Waterfield J, Bartlam B, et al. Saturation in qualitative research: exploring its conceptualization and operationalization. Qual Quant. 2017. https://doi.org/10.1007/s11135-017-0574-8 .

Caine K. Local standards for sample size at CHI. In Proceedings of the 2016 CHI conference on human factors in computing systems. 2016;981–992. ACM.

Carlsen B, Glenton C. What about N? A methodological study of sample-size reporting in focus group studies. BMC Med Res Methodol. 2011;11(1):26.

Constantinou CS, Georgiou M, Perdikogianni M. A comparative method for themes saturation (CoMeTS) in qualitative interviews. Qual Res. 2017;17(5):571–88.

Dai NT, Free C, Gendron Y. Interview-based research in accounting 2000–2014: a review. November 2016. https://ssrn.com/abstract=2711022 or https://doi.org/10.2139/ssrn.2711022 . Accessed 17 May 2018.

Francis JJ, Johnston M, Robertson C, Glidewell L, Entwistle V, Eccles MP, et al. What is an adequate sample size? Operationalising data saturation for theory-based interview studies. Psychol Health. 2010;25(10):1229–45.

Guest G, Bunce A, Johnson L. How many interviews are enough? An experiment with data saturation and variability. Field Methods. 2006;18(1):59–82.

Guetterman TC. Descriptions of sampling practices within five approaches to qualitative research in education and the health sciences. Forum Qual Soc Res. 2015;16(2):25. http://nbn-resolving.de/urn:nbn:de:0114-fqs1502256 . Accessed 17 May 2018.

Hagaman AK, Wutich A. How many interviews are enough to identify metathemes in multisited and cross-cultural research? Another perspective on guest, bunce, and Johnson’s (2006) landmark study. Field Methods. 2017;29(1):23–41.

Hennink MM, Kaiser BN, Marconi VC. Code saturation versus meaning saturation: how many interviews are enough? Qual Health Res. 2017;27(4):591–608.

Marshall B, Cardon P, Poddar A, Fontenot R. Does sample size matter in qualitative research?: a review of qualitative interviews in IS research. J Comput Inform Syst. 2013;54(1):11–22.

Mason M. Sample size and saturation in PhD studies using qualitative interviews. Forum Qual Soc Res 2010;11(3):8. http://nbn-resolving.de/urn:nbn:de:0114-fqs100387 . Accessed 17 May 2018.

Safman RM, Sobal J. Qualitative sample extensiveness in health education research. Health Educ Behav. 2004;31(1):9–21.

Saunders MN, Townsend K. Reporting and justifying the number of interview participants in organization and workplace research. Br J Manag. 2016;27(4):836–52.

Sobal J. Sample extensiveness in qualitative nutrition education research. J Nutr Educ. 2001;33(4):184–92.

Thomson SB. Sample size and grounded theory. JOAAG. 2010;5(1). http://www.joaag.com/uploads/5_1__Research_Note_1_Thomson.pdf . Accessed 17 May 2018.

Baker SE, Edwards R. How many qualitative interviews is enough?: expert voices and early career reflections on sampling and cases in qualitative research. National Centre for Research Methods Review Paper. 2012; http://eprints.ncrm.ac.uk/2273/4/how_many_interviews.pdf . Accessed 17 May 2018.

Ogden J, Cornwell D. The role of topic, interviewee, and question in predicting rich interview data in the field of health research. Sociol Health Illn. 2010;32(7):1059–71.

Green J, Thorogood N. Qualitative methods for health research. London: Sage; 2004.

Ritchie J, Lewis J, Elam G. Designing and selecting samples. In: Ritchie J, Lewis J, editors. Qualitative research practice: a guide for social science students and researchers. London: Sage; 2003. p. 77–108.

Britten N. Qualitative research: qualitative interviews in medical research. BMJ. 1995;311(6999):251–3.

Creswell JW. Qualitative inquiry and research design: choosing among five approaches. 2nd ed. London: Sage; 2007.

Fugard AJ, Potts HW. Supporting thinking on sample sizes for thematic analyses: a quantitative tool. Int J Soc Res Methodol. 2015;18(6):669–84.

Emmel N. Themes, variables, and the limits to calculating sample size in qualitative research: a response to Fugard and Potts. Int J Soc Res Methodol. 2015;18(6):685–6.

Braun V, Clarke V. (Mis) conceptualising themes, thematic analysis, and other problems with Fugard and Potts’ (2015) sample-size tool for thematic analysis. Int J Soc Res Methodol. 2016;19(6):739–43.

Hammersley M. Sampling and thematic analysis: a response to Fugard and Potts. Int J Soc Res Methodol. 2015;18(6):687–8.

Charmaz K. Constructing grounded theory: a practical guide through qualitative analysis. London: Sage; 2006.

Bowen GA. Naturalistic inquiry and the saturation concept: a research note. Qual Res. 2008;8(1):137–52.

Morse JM. Data were saturated. Qual Health Res. 2015;25(5):587–8.

O’Reilly M, Parker N. ‘Unsatisfactory saturation’: a critical exploration of the notion of saturated sample sizes in qualitative research. Qual Res. 2013;13(2):190–7.

van Manen M, Higgins I, van der Riet P. A conversation with Max van Manen on phenomenology in its original sense. Nurs Health Sci. 2016;18(1):4–7.

Dey I. Grounding grounded theory. San Francisco, CA: Academic Press; 1999.

Hays DG, Wood C, Dahl H, Kirk-Jenkins A. Methodological rigor in journal of counseling & development qualitative research articles: a 15-year review. J Couns Dev. 2016;94(2):172–83.

Moher D, Liberati A, Tetzlaff J, Altman DG, Prisma Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009; 6(7): e1000097.

Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.

Boyatzis RE. Transforming qualitative information: thematic analysis and code development. Thousand Oaks, CA: Sage; 1998.

Levitt HM, Motulsky SL, Wertz FJ, Morrow SL, Ponterotto JG. Recommendations for designing and reviewing qualitative research in psychology: promoting methodological integrity. Qual Psychol. 2017;4(1):2–22.

Morrow SL. Quality and trustworthiness in qualitative research in counseling psychology. J Couns Psychol. 2005;52(2):250–60.

Barroso J, Sandelowski M. Sample reporting in qualitative studies of women with HIV infection. Field Methods. 2003;15(4):386–404.

Glenton C, Carlsen B, Lewin S, Munthe-Kaas H, Colvin CJ, Tunçalp Ö, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings—paper 5: how to assess adequacy of data. Implement Sci. 2018;13(Suppl 1):14.

Onwuegbuzie AJ, Leech NL. A call for qualitative power analyses. Qual Quant. 2007;41(1):105–21.

Sandelowski M. Real qualitative researchers do not count: the use of numbers in qualitative research. Res Nurs Health. 2001;24(3):230–40.

Erickson F. Qualitative methods in research on teaching. In: Wittrock M, editor. Handbook of research on teaching. 3rd ed. New York: Macmillan; 1986. p. 119–61.

Bradbury-Jones C, Taylor J, Herber O. How theory is used and articulated in qualitative research: development of a new typology. Soc Sci Med. 2014;120:135–41.

Greenhalgh T, Annandale E, Ashcroft R, Barlow J, Black N, Bleakley A, et al. An open letter to the BMJ editors on qualitative research. BMJ. 2016;352:i563.


Acknowledgments

We would like to thank Dr. Paula Smith and Katharine Lee for their comments on a previous draft of this paper as well as Natalie Ann Mitchell and Meron Teferra for assisting us with data extraction.

This research was initially conceived of and partly conducted with financial support from the Multidisciplinary Assessment of Technology Centre for Healthcare (MATCH) programme (EP/F063822/1 and EP/G012393/1). The research continued and was completed independent of any support. The funding body did not have any role in the study design, the collection, analysis and interpretation of the data, in the writing of the paper, and in the decision to submit the manuscript for publication. The views expressed are those of the authors alone.

Availability of data and materials

Supporting data can be accessed in the original publications. Additional File 2 lists all eligible studies that were included in the present analysis.

Author information

Authors and affiliations

Department of Psychology, University of Bath, Building 10 West, Claverton Down, Bath, BA2 7AY, UK

Konstantina Vasileiou & Julie Barnett

School of Psychology, Newcastle University, Ridley Building 1, Queen Victoria Road, Newcastle upon Tyne, NE1 7RU, UK

Susan Thorpe

Department of Computer Science, Brunel University London, Wilfred Brown Building 108, Uxbridge, UB8 3PH, UK

Terry Young


Contributions

JB and TY conceived the study; KV, JB, and TY designed the study; KV identified the articles and extracted the data; KV and JB assessed eligibility of articles; KV, JB, ST, and TY contributed to the analysis of the data, discussed the findings and early drafts of the paper; KV developed the final manuscript; KV, JB, ST, and TY read and approved the manuscript.

Corresponding author

Correspondence to Konstantina Vasileiou .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

Terry Young is an academic who undertakes research and occasional consultancy in the areas of health technology assessment, information systems, and service design. He is unaware of any direct conflict of interest with respect to this paper. All other authors have no competing interests to declare.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional Files

Additional File 1:

Editorial positions on qualitative research and sample considerations (where available). (DOCX 12 kb)

Additional File 2:

List of eligible articles included in the review ( N  = 214). (DOCX 38 kb)

Additional File 3:

Data Extraction Form. (DOCX 15 kb)

Additional File 4:

Citations used by articles to support their position on saturation. (DOCX 14 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

Cite this article

Vasileiou, K., Barnett, J., Thorpe, S. et al. Characterising and justifying sample size sufficiency in interview-based studies: systematic analysis of qualitative health research over a 15-year period. BMC Med Res Methodol 18 , 148 (2018). https://doi.org/10.1186/s12874-018-0594-7


Received : 22 May 2018

Accepted : 29 October 2018

Published : 21 November 2018



  • Sample size
  • Sample size justification
  • Sample size characterisation
  • Data adequacy
  • Qualitative health research
  • Qualitative interviews
  • Systematic analysis

BMC Medical Research Methodology

ISSN: 1471-2288


Riddle me this: How many interviews (or focus groups) are enough?


Emily Namey

This blog post is the final in a series of three sampling-focused posts.

The first two posts in this series describe commonly used research sampling strategies and provide some guidance on how to choose from this range of sampling methods. Here we delve further into the sampling world and address sample sizes for qualitative research and evaluation projects. Specifically, we address the often-asked question: How many in-depth interviews/focus groups do I need to conduct for my study?

Within the qualitative literature (and community of practice), the concept of “saturation” – the point when incoming data produce little or no new information – is the well-accepted standard by which sample sizes for qualitative inquiry are determined ( Guest et al., 2006 ; Guest and MacQueen, 2008 ). There’s just one small problem with this: saturation, by definition, can be determined only during or after data analysis. And most of us need to justify our sample sizes (to funders, ethics committees, etc.) before collecting data!

Until relatively recently, researchers and evaluators had to rely on rules of thumb or their personal experience to estimate how many qualitative data collection events they needed for a study; empirical data to support these sample sizes were virtually non-existent. This began to change a little over a decade ago, when Morgan and colleagues (2002) decided to plot (and publish!) the number of new concepts identified in successive interviews across four datasets. They found that almost no new concepts emerged after 20 interviews. Extrapolating from their data, we see that the first five to six in-depth interviews produced the majority of new data, and approximately 80% to 92% of concepts were identified within the first 10 interviews.

Building on this work, Guest et al. (2006) conducted a systematic inductive thematic analysis of 60 in-depth interviews among female sex workers in West Africa. Of the 114 themes identified in the entire dataset, 80 (70%) turned up in the first six interviews, and 100 themes (92%) were identified within the first 12 interviews (Figure 1). Additionally, those 100 themes comprised 97% of the most common (highest prevalence) themes, indicating that the “big ones” were evident early on.

Figure 1. Number of new codes identified in batches of six individual interviews ( Guest et al., 2006 )
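The kind of cumulative counting behind Figure 1 is easy to reproduce for your own codebook. A minimal sketch in Python; the per-interview theme sets below are invented placeholders, not the study's data:

```python
# Count, for each successive interview, how many themes are new,
# i.e. not observed in any earlier interview.

def cumulative_new_themes(interviews):
    """Return a list of new-theme counts, one per interview."""
    seen = set()
    new_counts = []
    for themes in interviews:
        fresh = set(themes) - seen   # themes not seen before this interview
        new_counts.append(len(fresh))
        seen |= fresh
    return new_counts

interviews = [
    {"access", "cost", "stigma"},    # interview 1
    {"cost", "trust", "stigma"},     # interview 2
    {"trust", "family"},             # interview 3
    {"cost", "family"},              # interview 4: nothing new
]

counts = cumulative_new_themes(interviews)
print(counts)  # → [3, 1, 1, 0]

# Cumulative proportion of all themes discovered so far:
total = sum(counts)
print([round(sum(counts[: i + 1]) / total, 2) for i in range(len(counts))])
# → [0.6, 0.8, 1.0, 1.0]
```

Plotting those cumulative proportions against interview number gives exactly the flattening curve that Guest et al. and Morgan et al. reported.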

Since Guest et al.’s publication in 2006, other researchers have confirmed that 6-12 interviews seem to be a sweet spot for the number of qualitative interviews needed to reach saturation. We provide the following table as a summary.

Study authors | Saturation definition | Findings
… | Not defined | …
… | The proportion of identified themes at a given point in analysis divided by the total number of themes identified in that analysis | …
… | The point, after conducting 10 interviews, when three additional interviews yield no new themes | …
… | The point at which linking concepts from two consecutive focus groups or individual interviews reveals no additional second-level categories | …
… | The number of interviews required to identify the most common themes in a total of three interviews | …
… | The proportion of identified themes at a given point in analysis divided by the total number of themes identified in that analysis | …
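Saturation monitoring of the kind these studies describe is straightforward to operationalize: code each interview, record which themes are new, and stop when a run of interviews adds nothing. The sketch below is an illustration, not any study's actual procedure; the theme labels are invented, and the stopping rule is a simplified version of one operational definition above (a run of three interviews yielding no new themes).

```python
def saturation_curve(themes_per_interview):
    """Count how many new themes each successive interview contributes."""
    seen, new_counts = set(), []
    for themes in themes_per_interview:
        fresh = set(themes) - seen
        new_counts.append(len(fresh))
        seen |= fresh
    return new_counts

def saturated_at(new_counts, window=3):
    """1-based index of the first interview followed by `window` interviews
    that add no new themes; None if that never happens."""
    for i in range(len(new_counts) - window):
        if all(c == 0 for c in new_counts[i + 1:i + 1 + window]):
            return i + 1
    return None

# Invented coding results from seven hypothetical interviews
interviews = [
    {"cost", "trust", "access"},
    {"trust", "stigma"},
    {"access", "cost"},
    {"stigma", "family"},
    {"family"},
    {"cost"},
    {"trust"},
]
counts = saturation_curve(interviews)  # [3, 1, 0, 1, 0, 0, 0]
stop = saturated_at(counts)            # 4: interviews 5-7 added nothing new
```

Sitting down after each session and updating a running tally like this is exactly the kind of record that lets you justify, after the fact, where data collection stopped.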

“But what about focus groups?” you ask. An empirically based study by Coenen et al. (2012) found that five focus groups were enough to reach saturation for their inductive thematic analysis. In a recent methodological study, we followed a similar approach to Guest et al. (2006) and monitored thematic discovery and code creation after each of 40 focus groups conducted among African-American men in North Carolina on the topic of health-seeking behavior. We found the majority of themes were identified within the first focus group, and nearly all of the important (read: most frequently expressed) themes were discovered within the first three focus groups (Figure 2).

Figure 2. Average number of new codes identified per focus group (focus groups randomly ordered) (Guest et al., 2016)

These data from our study suggest that a sample size of two to three focus groups will likely capture about 80% of themes on a topic — including those most broadly shared — in a study with a relatively homogeneous population, and using a semi-structured guide. As few as three to six focus groups are likely enough to identify 90% of important themes.

Note that these sample sizes, for both interviews and focus groups, apply per sub-population of interest. Note too that thematic saturation will vary based on a number of factors (keep watch for a future blog post) and sample size should be adjusted accordingly.



Review Article

How many participants do we have to include in properly powered experiments? A tutorial of power analysis with reference tables

Given that an effect size of d = .4 is a good first estimate of the smallest effect size of interest in psychological research, we already need over 50 participants for a simple comparison of two within-participants conditions if we want to run a study with 80% power. This is more than current practice. In addition, as soon as a between-groups variable or an interaction is involved, numbers of 100, 200, and even more participants are needed. As long as we do not accept these facts, we will keep on running underpowered studies with unclear results. Addressing the issue requires a change in the way research is evaluated by supervisors, examiners, reviewers, and editors. The present paper describes reference numbers needed for the designs most often used by psychologists, including single-variable between-groups and repeated-measures designs with two and three levels, two-factor designs involving two repeated-measures variables or one between-groups variable and one repeated-measures variable (split-plot design). The numbers are given for the traditional, frequentist analysis with p < .05 and Bayesian analysis with BF > 10. These numbers provide researchers with a standard to determine (and justify) the sample size of an upcoming study. The article also describes how researchers can improve the power of their study by including multiple observations per condition per participant.

  • Page/Article: 16
  • DOI: 10.5334/joc.72
  • Accepted on 22 May 2019
  • Published on 19 Jul 2019
  • Peer Reviewed
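The abstract’s headline figure (just over 50 participants for a two-condition within-participants comparison at d = .4 and 80% power) can be reproduced approximately with the standard normal-approximation formula n ≈ ((z₁₋α/₂ + z₁₋β) / d)². The sketch below uses only the Python standard library; it is an approximation, and the paper’s reference tables, based on the noncentral t distribution, come out a couple of participants higher.

```python
from math import ceil
from statistics import NormalDist

def approx_n_paired(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size for a paired (within-participants)
    t-test: n ~ ((z_{1-alpha/2} + z_{power}) / d)^2. The exact noncentral-t
    calculation adds roughly two more participants at these settings."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~ 1.96
    z_b = NormalDist().inv_cdf(power)          # ~ 0.84
    return ceil(((z_a + z_b) / d) ** 2)

n = approx_n_paired(0.4)  # 50, in line with "over 50 participants"
```

The same formula shows why smaller effects are so expensive: halving d to 0.2 roughly quadruples the required sample.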


Qualitative vs quantitative research

13 min read You’ll use both quantitative and qualitative research methods to gather survey data. What are they exactly, and how can you best use them to gain the most accurate insights?

What is qualitative research?

Qualitative research is all about language, expression, body language and other forms of human communication. That covers words, meanings and understanding. Qualitative research is used to describe WHY. Why do people feel the way they do, why do they act in a certain way, what opinions do they have and what motivates them?

Qualitative data is used to understand phenomena – things that happen, situations that exist, and most importantly the meanings associated with them. It can help add a ‘why’ element to factual, objective data.

Qualitative research gives breadth, depth and context to questions, although its linguistic subtleties and subjectivity can mean that results are trickier to analyse than quantitative data.

Researchers call this qualitative data unstructured data, because it has not traditionally had the type of structure that computers can process. Until recently it has been accessible only to human brains, and although our brains are highly sophisticated, they have limited processing power. What can help computers and the human brain analyse this unstructured data?


What is quantitative research?

Quantitative data refers to numerical information. Quantitative research gathers information that can be counted, measured, or rated numerically – AKA quantitative data. Scores, measurements, financial records, temperature charts and receipts or ledgers are all examples of quantitative data.

Quantitative data is often structured data, because it follows a consistent, predictable pattern that computers and calculating devices are able to process with ease. Humans can process it too, although we are now able to pass it over to machines to process on our behalf. This is partly what has made quantitative data so important historically, and why quantitative data – sometimes called ‘hard data’ – has dominated over qualitative data in fields like business, finance and economics.

It’s easy to ‘crunch the numbers’ of quantitative data and produce results visually in graphs, tables and on data analysis dashboards. Thanks to today’s abundance and accessibility of processing power, combined with our ability to store huge amounts of information, quantitative data has fuelled the Big Data phenomenon, putting quantitative methods and vast amounts of quantitative data at our fingertips.

As we’ve indicated, quantitative and qualitative data are entirely different and mutually exclusive categories. Here are a few of the differences between them.

1. Data collection

Data collection methods for quantitative data and qualitative data vary, but there are also some places where they overlap.

Qualitative data collection methods | Quantitative data collection methods
Gathered from focus groups, in-depth interviews, case studies, expert opinion, observation, audio recordings, and can also be collected using surveys. | Gathered from surveys, questionnaires, polls, or from secondary sources like census data, reports, records and historical business data.
Uses … and open text survey questions | Intended to be as close to objective as possible. Understands the ‘human touch’ only through quantifying the OE data that only this type of research can code.

2. Data analysis

Quantitative data suits statistical analysis techniques like linear regression, T-tests and ANOVA. These are quite easy to automate, and large quantities of quantitative data can be analyzed quickly.

Analyzing qualitative data needs a higher degree of human judgement, since unlike quantitative data, non-numerical data of a subjective nature has characteristics that inferential statistics can’t perceive. Working at a human scale has historically meant that qualitative data is lower in volume – although it can be richer in insights.

Qualitative data analysis | Quantitative data analysis
Results are categorised, summarised and interpreted using human language and perception, as well as logical reasoning | Results are analysed mathematically and statistically, without recourse to intuition or personal experience
Fewer respondents needed, each providing more detail | Many respondents needed to achieve a representative result

3. Strengths and weaknesses

When weighing up qualitative vs quantitative research, it’s largely a matter of choosing the method appropriate to your research goals. If you’re in the position of having to choose one method over another, it’s worth knowing the strengths and limitations of each, so that you know what to expect from your results.

Qualitative approach | Quantitative approach
Can be used to help formulate a theory to be researched by describing a present phenomenon | Can be used to test and confirm a formulated theory
Results typically expressed as text, in a report, presentation or journal article | Results expressed as numbers, tables and graphs, relying on numerical data to tell a story
Less suitable for scientific research | More suitable for scientific research and compatible with most standard statistical analysis methods
Harder to replicate, since no two people are the same | Easy to replicate, since what is countable can be counted again
Less suitable for sensitive data: respondents may be biased or too familiar with the pro | Ideal for sensitive data as it can be anonymized and secured

Qualitative vs quantitative – the role of research questions

How do you know whether you need qualitative or quantitative research techniques? By finding out what kind of data you’re going to be collecting.

You’ll do this as you develop your research question, one of the first steps to any research program. It’s a single sentence that sums up the purpose of your research, who you’re going to gather data from, and what results you’re looking for.

As you formulate your question, you’ll get a sense of the sort of answer you’re working towards, and whether it will be expressed in numerical data or qualitative data.

For example, your research question might be “How often does a poor customer experience cause shoppers to abandon their shopping carts?” – this is a quantitative topic, as you’re looking for numerical values.

Or it might be “What is the emotional impact of a poor customer experience on regular customers in our supermarket?” This is a qualitative topic, concerned with thoughts and feelings and answered in personal, subjective ways that vary between respondents.

Here’s how to evaluate your research question and decide which method to use:

  • Qualitative research:

Use this if your goal is to  understand  something – experiences, problems, ideas.

For example, you may want to understand how poor experiences in a supermarket make your customers feel. You might carry out this research through focus groups or in-depth interviews (IDIs). For a larger scale research method you could start by surveying supermarket loyalty card holders, asking open text questions like “How would you describe your experience today?” or “What could be improved about your experience?” This research will provide context and understanding that quantitative research will not.

  • Quantitative research:

Use this if your goal is to  test or confirm  a hypothesis, or to study cause and effect relationships. For example, you want to find out what percentage of your returning customers are happy with the customer experience at your store. You can collect data to answer this via a survey.

For example, you could recruit 1,000 loyalty card holders as participants, asking them, “On a scale of 1-5, how happy are you with our store?” You can then make simple mathematical calculations to find the average score. The larger sample size will help make sure your results aren’t skewed by anomalous data or outliers, so you can draw conclusions with confidence.
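To make the arithmetic concrete, here is a minimal sketch (with invented scores) of the average and an approximate 95% margin of error, which is the quantity that shrinks as the sample grows and protects the result from outliers:

```python
from math import sqrt
from statistics import mean, stdev

def summarize(scores):
    """Sample mean and an approximate 95% margin of error (normal approximation)."""
    m = mean(scores)
    moe = 1.96 * stdev(scores) / sqrt(len(scores))
    return m, moe

# Invented answers to "On a scale of 1-5, how happy are you with our store?"
scores = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]
m, moe = summarize(scores)  # mean 3.8; the margin shrinks as n grows
```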

Qualitative and quantitative research combined?

Do you always have to choose between qualitative or quantitative data?

Qualitative vs quantitative cluster chart

In some cases you can get the best of both worlds by combining quantitative and qualitative data. You could start with a qualitative phase to understand the landscape of your research, gain insights around a topic and propose a hypothesis, then adopt a quantitative research method to test it. The qualitative phase helps you discover where to focus your survey, or lets you pre-test your survey to ensure your questions are understood as you intended. Finally, a further round of qualitative research can bring your insights and story to life. This mixed methods approach is becoming increasingly popular with businesses looking for in-depth insights.

For example, in the supermarket scenario we’ve described, you could start out with a qualitative data collection phase where you use focus groups and conduct interviews with customers. You might find suggestions in your qualitative data that customers would like to be able to buy children’s clothes in the store.

In response, the supermarket might pilot a children’s clothing range. Targeted quantitative research could then reveal whether or not those stores selling children’s clothes achieve higher customer satisfaction scores and a rise in profits for clothing.

Together, qualitative and quantitative data, combined with statistical analysis, have provided important insights about customer experience, and have proven the effectiveness of a solution to business problems.

Qualitative vs quantitative question types

As we’ve noted, surveys are one of the data collection methods suitable for both quantitative and qualitative research. Depending on the types of questions you choose to include, you can generate qualitative and quantitative data. Here we have summarized some of the survey question types you can use for each purpose.

Qualitative data survey questions

There are fewer survey question options for collecting qualitative data, since they all essentially do the same thing – provide the respondent with space to enter information in their own words. Qualitative research is not typically done with surveys alone, and researchers may use a mix of qualitative methods. As well as a survey, they might conduct in-depth interviews, use observational studies or hold focus groups.

Open text ‘Other’ box (can be used with multiple choice questions)

Other text field

Text box (space for short written answer)

What is your favourite item on our drinks menu?

Essay box (space for longer, more detailed written answers)

Tell us about your last visit to the café

Quantitative data survey questions

These questions will yield quantitative data – i.e. a numerical value.

Net Promoter Score (NPS)

On a scale of 1-10, how likely are you to recommend our café to other people?

Likert Scale

How would you rate the service in our café? Very dissatisfied to Very satisfied

Radio buttons (respondents choose just one option)

Which drink do you buy most often? Coffee, Tea, Hot Chocolate, Cola, Squash

Check boxes (respondents can choose multiple options)

On which days do you visit the cafe? Mon-Saturday

Sliding scale

Using the sliding scale, how much do you agree that we offer excellent service?

Star rating

Please rate the following aspects of our café: Service, Quality of food, Seating comfort, Location
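The NPS question above yields a single metric via a fixed formula: the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6), computed on the standard 0–10 scale. A minimal sketch with invented responses:

```python
def nps(scores):
    """Net Promoter Score on the standard 0-10 scale:
    % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))

sample = [10, 9, 9, 8, 7, 6, 5, 10, 3, 9]  # invented responses
score = nps(sample)  # 5 promoters, 3 detractors, 2 passives -> 20
```

Passives (7–8) count toward the denominator but neither add to nor subtract from the score, which is why NPS ranges from −100 to +100.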

Analysing data (quantitative or qualitative) using technology

We are currently at an exciting point in the history of qualitative analysis. Digital analysis and other methods that were formerly used exclusively for quantitative data are now used for interpreting non-numerical data too.

Artificial intelligence programs can now be used to analyse open text, and turn qualitative data into structured and semi-structured quantitative data that relates to qualitative topics such as emotion and sentiment, opinion and experience.
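Real systems use trained language models, but the underlying move of converting open text into counts can be illustrated with a toy keyword tally. The lexicon and responses below are invented for demonstration and are nothing like a production sentiment model:

```python
# Toy keyword lexicon -- invented for illustration; real tools use trained models.
POSITIVE = {"great", "friendly", "love", "quick"}
NEGATIVE = {"slow", "rude", "broken", "dirty"}

def sentiment_counts(responses):
    """Label each open-text answer by simple keyword overlap and tally the labels."""
    counts = {"positive": 0, "negative": 0, "neutral": 0}
    for text in responses:
        words = set(text.lower().split())
        pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
        label = "positive" if pos > neg else "negative" if neg > pos else "neutral"
        counts[label] += 1
    return counts

answers = ["Great friendly staff", "Checkout was slow", "It was fine"]
tally = sentiment_counts(answers)  # one of each label
```

Once open text has been reduced to counts like these, it can be charted and cross-tabulated just like any other quantitative variable.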

Research that in the past would have meant qualitative researchers conducting time-intensive studies using analysis methods like thematic analysis can now be done in a very short space of time. This not only saves time and money, but opens up qualitative data analysis to a much wider range of businesses and organisations.

The most advanced tools can even be used for real-time statistical analysis, forecasting and prediction, making them a powerful asset for businesses.

Qualitative or quantitative – which is better for data analysis?

Historically, quantitative data was much easier to analyse than qualitative data. But as we’ve seen, modern technology is helping qualitative analysis to catch up, making it quicker and less labor-intensive than before.

That means the choice between qualitative and quantitative studies no longer needs to factor in ease of analysis, provided you have the right tools at your disposal. With an integrated platform like Qualtrics, which incorporates data collection, data cleaning, data coding and a powerful suite of analysis tools for both qualitative and quantitative data, you have a wide range of options at your fingertips.



  • Open access
  • Published: 21 June 2024

Healthcare providers’ perception of caring for older patients with depression and physical multimorbidity: insights from a focus group study

  • Laura Tops   ORCID: orcid.org/0000-0002-4849-3540 1 ,
  • Mei Lin Cromboom 1 ,
  • Anouk Tans   ORCID: orcid.org/0000-0003-3401-5158 1 ,
  • Mieke Deschodt   ORCID: orcid.org/0000-0003-1560-2277 1 , 2 ,
  • Mathieu Vandenbulcke   ORCID: orcid.org/0000-0001-9765-1499 3 , 4 &
  • Mieke Vermandere   ORCID: orcid.org/0000-0002-0437-6633 1 , 4  

BMC Primary Care volume  25 , Article number:  223 ( 2024 ) Cite this article


The caretaking process for older adults with depression and physical multimorbidity is complex. Older patients with both psychiatric and physical illnesses require an integrated and comprehensive approach to effectively manage their care. This approach should address common risk factors, acknowledge the bidirectional relationship between somatic and mental health conditions, and integrate treatment strategies for both aspects. Furthermore, active engagement of healthcare providers in shaping new care processes is imperative for achieving sustainable change.

To explore and understand the needs and expectations of healthcare providers (HCPs) concerning the care for older patients with depression and physical multimorbidity.

Seventeen HCPs who work with the target group in primary and residential care participated in three focus group interviews. A constructivist Grounded Theory approach was applied. The results were analyzed using the QUAGOL guide.

Participants highlighted the importance of patient-centeredness, interprofessional collaboration, and shared decision-making in current healthcare practices. There is also a need to further emphasize the advantages and risks of technology in delivering care. Additionally, HCPs working with this target population should possess expertise in both psychiatric and somatic care to provide comprehensive care. Care should be organized proactively, anticipating needs rather than reacting to them. Healthcare providers, including a dedicated care manager, might consider collaborating, integrating their expertise instead of operating in isolation. Lastly, effective communication among HCPs, patients, and their families is crucial to ensure high-quality care delivery.

The findings stress the importance of a comprehensive approach to caring for older adults dealing with depression and physical comorbidity. These insights will fuel the development of an integrated care model that caters to the needs of this population.

Peer Review reports

Introduction

Persons with mental illnesses have a shorter lifespan than the general population, mostly due to physical comorbidities [ 1 ]. Having a mental illness almost doubles the risk of cardiovascular diseases, diabetes, and obesity in comparison to healthy persons [ 2 , 3 ]. Moreover, compared to patients with chronic physical conditions, patients with mental illness also have higher rates of hospitalization and emergency department use [ 4 , 5 ]. Among older individuals with depression, more than two-thirds present at least one somatic illness, and more than half of those with somatic comorbidities have two or more such illnesses [ 6 ]. Furthermore, older people with mental illnesses face the dual stigma of being both a geriatric and psychiatric patient [ 2 ].

Traditional care for older adults with mental illnesses lacks an integrated approach [ 7 ]. The effective management of their care requires a comprehensive approach that addresses common risk factors and the bidirectional relationship between somatic and mental health conditions, and integrates treatment for both [ 3 , 8 , 9 ]. The integration of mental and somatic healthcare is a top priority in national and international policy documents [ 2 , 5 , 6 , 10 , 11 , 12 ].

A recent scoping review identified the intervention components that are commonly used within complex multicomponent care models for older adults dealing with both depression and physical multimorbidity [ 13 ]. Findings indicated that many of these care models share similar elements, such as the use of multidisciplinary teams, care coordinators, considering treatment interactions (e.g., polypharmacy, guideline interaction), continuity of care, individualized care planning, and personalized, holistic assessments with self-management support [ 13 ]. The findings of the review underscore the importance of recognizing the commonalities in intervention components within care models for older adults dealing with depression and physical multimorbidity. This understanding serves as a foundation for the subsequent discussion, which will delve into the practical aspects of implementing such interventions and the significance of stakeholder engagement in shaping their successful execution.

Bridging the gap between research and practice is crucial for the successful development and implementation of new healthcare interventions. Gathering valuable insights and perspectives on current practices from all relevant stakeholders (e.g. patients, informal caregivers, healthcare providers and policy makers) as part of a contextual analysis plays an essential role in ensuring the development of effective interventions aligned with their expertise and preferences. Incorporating implementation science principles further enhances the likelihood of successful adoption by addressing barriers and optimizing the implementation process [ 14 ]. Involving stakeholders in healthcare research also presents certain difficulties. For instance, when diverse individuals with their unique interests come together, it can result in complex situations, particularly when making decisions [ 15 ]. In healthcare research, stakeholder involvement can lead to the accumulation of different viewpoints and perceptions and increased trust and legitimacy among service users [ 16 , 17 ], improving the quality, relevance and impact of health research [ 18 , 19 ]. However, despite the direct influence changes in healthcare policy have on stakeholders, they are not always involved in the decision-making process [ 20 ]. Professionals’ practical experience grants them a deep understanding of specific contexts, allowing them to grasp nuances that may elude outsiders. Healthcare providers play a vital role in the realm of elderly care. In scholarly literature, they are recognized as mediators of context-specific knowledge, serving as invaluable conduits for insights tailored to the needs of older individuals [ 21 , 22 ].

The focus group interviews within this study form an integral component of the context analysis conducted within the framework of the I-CONNECT project. Standing for ‘Integrated care program for home-dwelling older adults with depression and physical multimorbidity,’ I-CONNECT aims to comprehensively address the healthcare needs of this specific demographic. The results of the focus group interviews will fuel the next stages of the development of an integrated care model that caters to the needs of this population. Therefore, the objective of this study is to delve into the perspectives of healthcare providers concerning the provision of care for older adults facing both depression and physical multimorbidity.

Design and setting

Focus groups were the preferred method because of the possibility for interaction between participants. By bringing together individuals with diverse backgrounds and viewpoints, we aimed to create a dynamic environment for exchanging ideas and exploring multiple perspectives on the given topic. The focus group interviews were conducted at the University Psychiatric Centre (UPC-KU Leuven), a Belgian academic psychiatric hospital. The study complied with the Consolidated Criteria for Reporting Qualitative Research (COREQ) [ 23 ].

Participants and recruitment

We conducted focus group interviews with HCPs who engage in regular professional interactions with older adults experiencing depression and physical comorbidities. To recruit participants, HCPs working in primary (e.g. home nursing, GP practice) and residential care (e.g. psychiatric hospital), and who have professional interactions with the target group, were contacted via e-mail or telephone and informed of the study’s aim. Flyers were also disseminated at strategic locations such as hospitals, doctor’s offices and pharmacies. Participants were given the opportunity to choose between an online or in-person format.

Eligibility criteria

Eligibility criteria were established to identify suitable participants in both residential and primary care settings. Within the residential setting, eligible participants included professionals holding the following professions: geriatric psychiatrist, geriatrician, nurse, and social worker. In the primary care setting, eligible participants encompassed general practitioners, psychologists, physiotherapists, pharmacists, and home care providers (e.g., domestic services, home nursing).

To be included in the study, participants had to be employed at the academic psychiatric hospital UPC KU Leuven or within the primary care vicinity of UPC KU Leuven. Participants were expected to have frequent professional interactions with patients aged 65 and above who presented with psychiatric and physical conditions. Proficiency in understanding and speaking the Dutch language was a prerequisite for inclusion.

We aimed for a focus group size of at least six and at most ten participants [ 24 ], which allowed everyone to share their opinion while still yielding diverse information. Moreover, by not including too many participants, we created a safe environment where everyone was comfortable enough to express themselves freely [ 25 ]. The researchers used maximum variation purposive sampling based on gender, profession, and working experience to recruit participants, following the principles of Patton et al. [ 26 ]. Participants signed the informed consent form in duplicate and received a voucher of 25 euros after completing the focus group discussion.

Data collection

The focus group interviews took place in November 2022, December 2022 and March 2023. Three focus group interviews were conducted, two in person and one online. Each focus group lasted approximately one and a half hours. All focus groups were audio-recorded with the consent of the participants. The online session was also video-recorded.

The focus groups were led by an experienced external moderator (AT), who was a member of the research team but held no affiliation with the psychiatric hospital. A semi-structured topic guide, created on the basis of an earlier literature review [ 27 ], was used during the focus group interviews (Annex I). The moderator commenced each session by informing the participants of the underlying purpose of the research. She additionally provided comprehensive insights into her professional background, thereby establishing her expertise in the field. Questions were asked about participants’ perceptions of the current care and their perspectives on the future care for older adults with depression and physical multimorbidity. The moderator ensured that all voices were heard and that the discussion did not deviate much from the topic [ 28 , 29 ]. Participants were also prompted to reflect on their own perspectives, facilitating a more comprehensive understanding of the data. Throughout the focus group discussions, the moderator posed supplementary questions designed to elicit participants’ viewpoints, ensuring that participants not only shared their ideas but also provided the rationale behind them [ 30 ]. Two observers (LT & MC) were present to take notes on the progress of the conversation and on non-verbal communication. These notes were integrated into the results section.

Data analysis

We used the constructivist grounded theory approach introduced by Charmaz [ 28 ] to gain a better understanding of healthcare providers’ (HCPs) perceptions of the care provided for older adults with depression and physical multimorbidity. Charmaz’s constructivist grounded theory aims to understand social phenomena and subjective experiences. By actively engaging with participants, iteratively analyzing data, and reflecting on our own biases, we can generate insights grounded in the perspectives of the HCPs. Including diverse voices allows us to capture the complexity of participants’ experiences within their social contexts, contributing to a more comprehensive understanding of the phenomena under investigation [ 31 ].

Conversations were transcribed verbatim. Participants were pseudonymized in the transcripts by assigning them numbers. Two researchers (LT & MC) carried out the analysis by means of the Qualitative Analysis Guide of Leuven (QUAGOL) [ 32 ], a practical guide rooted in the constant comparative method of the grounded theory approach [ 24 ]. The QUAGOL method guides the researcher to a comprehensive view of the qualitative interview data. The first part of the method is described as ‘paper and pencil work’, the preparatory stage before the coding process. During this stage, researchers thoroughly review the transcripts, craft narrative reports, and endeavor to formulate concepts and, ultimately, a conceptual framework from the data [ 32 ].

The second part consists of the actual coding through the use of dedicated software [ 32 ]. Two researchers (LT & MC) independently coded the data with ATLAS.ti Web software. LT and MC carefully analyzed the interview transcripts, identifying important concepts related to the care of older adults with depression and physical health issues. MV then reviewed and, if needed, refined the initial themes to ensure a thorough analysis of the data.

Participants

The focus groups in this study comprised 4 to 8 participants each, with a total of 17 healthcare providers taking part. The first focus group was composed of a heterogeneous group of HCPs, while the second and third focus group interviews had less heterogeneous profiles. In Table  1 , the gender distribution of all participants shows that the majority were female, comprising 65% of the total. During the first focus group discussion, one participant was absent (reason not reported), resulting in a total of eight instead of the intended nine participants.

Healthcare providers’ perceptions

Throughout the focus group interviews, participants shared insights on various subjects, including patient-centeredness, interprofessional collaboration, shared decision-making, technology, capacity building, proactive care, and effective communication. Each of these topics will be examined in depth in the subsequent sections.

Patient-centeredness

Participants emphasized the importance of individualized care tailored to the unique needs and living situation of each patient. They highlighted the need to identify and address aspects of care that can be adjusted to improve patients’ quality of life.

Also looking from the perspective of the patient as much as possible, like hearing how it’s going, how they’re experiencing it. If it’s still possible, continuing to give as much control as possible to the patient (Focus group 2, participant 2).

Many of the participants felt that it is crucial for patients to maintain control over their own care process for as long as possible. They highlighted the role of the environment in enabling patients to stay in control in their own surroundings. The participants also stressed the importance of keeping patients well-informed about available care options to facilitate good decision-making.

I often also find it important that patients are well informed, that they are able to make informed decisions, weigh the options, and that you then work together towards a goal and preferably in consultation with the system as much as possible, whatever that system may be. And that can also be good neighbors or other involved parties. So I think that network part is also really important (Focus group 1, participant 5).

According to some participants, striking a balance between the patient’s preferences and the necessary medical interventions is challenging. Furthermore, one participant underscored that patients’ capacity to manage their condition evolves with the stage of the illness. For instance, individuals in remission from depression may exhibit different control dynamics compared to those in the acute phase of the condition.

To what extent are you going to acknowledge and follow the wishes of a depressed patient. And to what extent are you going to push good care, that we consider good care. That’s really difficult (Focus group 1, participant 8).

I don’t think you can expect a patient who has major depression to actively take control of their own care (Focus group 1, participant 5).

Interprofessional collaboration

Participants valued teamwork among different healthcare providers when dealing with complex patients. Some suggested that this could result in better continuity of care.

Collaboration between partners to work on continuity of care, that’s also a challenge but is part of good care (Focus group 1, participant 6).

Several caregivers suggested that interdisciplinary patient meetings provide an effective forum for collaborating with all stakeholders involved in a patient’s care. These meetings allow for the planning of an ideal course of care and provide an opportunity to discuss and assign responsibilities, as well as to evaluate what is achievable for everyone involved.

Care consultations with family members and possibly those who already, if home nursing care comes to the home, to gather them around the table and to just hear how it’s going, how is everyone’s capacity, what is needed to get that clear (Focus group 2, participant 2).

According to the participants, collaboration among different healthcare settings can be improved. More emphasis could be placed on holistic care, where somatic and psychiatric conditions are treated together. To that end, healthcare providers across different settings should be encouraged to work collaboratively in order to enhance the quality of patient care.

And there’s such a gap between them and they need to come together. And I find, I think that we can best offer complete care, total care if we can unite those two (Focus group 2, participant 1).

Several participants proposed the idea of a “coordinator” or “responsible caregiver” as a potential solution to enhance continuity of care and address the issues related to care coordination.

Because we notice that there are very often problems with care coordination. That people often come by the house, but that no one has really thought about how they relate to each other and that sometimes someone else has to come along to get the job done (Focus group 2, participant 2).

Yes, it would be much better if the nurse, whose patient is going to the short-stay center, that they can remain the nurse in charge, to be the intermediary instead of us having to turn to another organization to temporarily take over (Focus group 1, participant 7).

Effective communication

According to the participants, there is still potential for improvement in the area of communication. To enhance clarity regarding care tasks and time schedules, communication must be improved not only among healthcare providers but also between healthcare providers and patients/families. As previously discussed, implementing a shared communication channel has the potential to enhance communication among all stakeholders involved.

P4: Yes, the communication between the various care providers, both specialists and other care providers. So that the multidisciplinarity, that it can improve (Focus group 3, participant 4).

We’ve already had situations and that’s mainly about who’s washing the patient uhm, is that the home nurse, is that the family help, who is taking up the care. That is frequent, that is something that occurs very often and that is then sometimes lost sight of because one person thinks that the other is doing that (Focus group 2, participant 2).

P3: Also maybe not having a channel. P2: [No channel] P1: [Not really knowing, I think] P2: [Yes] P3: [I once witnessed someone who had a sort of notebook and so then the one caregiver writes in the notebook and indeed then the next one comes another day and can then see aha yes that’s what happened (Focus group 2).

Shared decision-making

Healthcare providers agree that developing an appropriate care plan requires coordination between the patient, their network, and caregivers, where all parties’ wishes and opinions are considered as much as possible. According to some participants, involving family members in care consultations can be highly beneficial as they can provide valuable insights into the patient’s situation. Healthcare providers also emphasize the importance of understanding the patient’s home situation to ensure better care.

That you then work together towards a goal and preferably in consultation with the system as much as possible (Focus group 1, participant 5).

When admitted, there is always a system discussion and with elderly patients we try to make sure that an involved party is present as much as possible, a partner but certainly also children. Because we also know that we need them in that story (Focus group 2, participant 3).

Integrating technology in patient care

Healthcare providers agree that integrating eHealth can benefit the future of patient care. Participants provided specific examples such as digital shared medical files, tablets, automatic pill dispensers, exercise robots, and video consultations. While many healthcare providers recognize the potential benefits of eHealth and digitization (e.g. time effectiveness), significant improvements are still needed to ensure proper functioning and efficiency. Participants remarked that some older generations may struggle to keep up with changing technologies, which can hinder progress in this field.

P2: There is still room for improvement in file management. M: [Yes? ] P2: Especially in opening the file because everyone works with a different file management system (Focus group 2).

I think there are two sides to that because I’ve noticed that many elderly people are being left behind because they can’t keep up with the technology and aren’t able to request certain things that they are entitled to (Focus group 2, participant 2).

Video or consultations by video call, I won’t say are an equal alternative but can be complementary in treatment or a follow-up or uhm a care pathway in any case. I think that that could come more in the future or could be installed more (Focus group 1, participant 6).

Various caregivers emphasize the need to be vigilant about the dangers of healthcare technology. For example, they believe it is important to update the digital record as if the patient is reading along. They also highlight the importance of maintaining human contact despite increasing digitalization.

That’s why there are more and more calls to write your reports with the knowledge that the patient is reading along (Focus group 1, participant 8).

I definitely think that that [eHealth] can be implemented more frequently in the future. But we do need to keep focusing on human contact (Focus group 2, participant 1).

Proactive care

During the discussion, some participants highlighted the need for a greater emphasis on preventive care measures. They observed that current medical interventions are reactive, only taken when problems have already arisen or when conditions have deteriorated, leaving patients in a more critical state. To address this issue, they suggested that more attention could be given to early care planning, which could help prevent the need for more drastic or specialized interventions later on.

While I sometimes think, if they would do that quicker, make that threshold a bit lower, that the response can be faster and that depression can also be resolved quicker, easier. Whether that’s the case, I don’t know of course, that’s my feeling (Focus group 1, participant 8).

P2: Actually, that healthcare proxy is already a good start to arrange everything in advance. That could easily be highlighted a bit more. M: [Yes, could be emphasized] P1: [So the preventive aspect, right] P2: That you no longer have to decide for the person, I hope. P4: [That they can decide for themselves] (Focus group 2).

Sometimes letting it drag on a bit too long, after which a sort of crisis arises or sort of, or deteriorating even further so that even more specialized care is then necessary (Focus group 1, participant 8).

Capacity building

Some respondents noted a concerning lack of knowledge among healthcare providers. Specifically, they mentioned that some HCPs seem to be unaware of how to effectively treat patients with somatic and psychiatric concerns, leading them to refer these patients to other healthcare providers. Enhancing the provision of specific training to HCPs regarding psychiatric and somatic illnesses can offer a promising solution.

P1: Yes, geriatric departments are like “yes, that is a psychiatric patient” and then. P2: [and then they come to us. And then we think, our nurses say we can’t handle that] (Focus group 2).

So uhm yes, what I also want for the future, in my view, is to give the staff some more training, to give them some more guidance (Focus group 2, participant 4).

One key point was the challenge of sharing knowledge effectively within organizations, underscoring the need for improved dissemination strategies. Additionally, the importance of allocating more resources and time for thoughtful decision-making in caregiving settings was emphasized, highlighting the human-centric nature of the work. Furthermore, the focus group interviews acknowledged the multifaceted challenges in caregiving, such as staffing shortages and resource constraints, demonstrating the need for enhanced support and resource allocation within the field.

Many organizations work with coordinators and such and the coordinators do have knowledge and disseminate it among their caregivers, for example, but that the people on the floor don’t (Focus group 1, participant 5).

Are there any specific growth opportunities for you in your department? [M] (…) More thorough, more people. That you can actually work in a more focused way and don’t have to make a decision too quickly or can tackle things more thoroughly. I mean, you’re working with people and not with things (Focus group 2, participant 1).

But when I go there and I see that there is understaffing, I also understand that they say: We’re already short of hands, do we now have to go spend an extra week in training, so I understand that as well. And then we run into the fact that there is a shortage in various areas I think, in terms of staff, time, finances (Focus group 1, participant 8).

Our findings based on the three focus group interviews demonstrate that placing patients at the core of the care process and empowering them to retain control over their own care for as long as possible is crucial. It is imperative for healthcare providers to collaborate effectively to elevate the quality of patient care. Furthermore, it could be beneficial for patients and families to be regarded as equal partners in the decision-making process. Participants highlighted several areas where improvements can be made. Technological features (e.g. digital shared medical files, tablets, automatic pill dispensers, exercise robots, and video consultations) can play a vital role in enhancing the efficiency of care processes, making them more time-efficient. Care could also consider shifting towards a more proactive approach, rather than solely relying on reactive measures. Additionally, the participants conveyed a shared belief in the potential benefits of optimal care coordination facilitated by a dedicated care manager. To enhance the delivery of high-quality care, it may be advisable for healthcare providers to undergo comprehensive training covering both psychiatric and somatic domains. Finally, to increase clarity regarding care tasks and time schedules, it is essential to enhance communication not only among healthcare providers but also between healthcare providers and patients/families.

Participants emphasized the importance of patient-centered care and shared decision-making (SDM). Encouraging the active participation of older depressed patients has been proven to improve their adherence to psychotherapeutic interventions [ 33 ]. Moreover, SDM can lead to higher levels of patient satisfaction and increased feelings of autonomy and empowerment [ 34 ]. Participants additionally stressed the importance of involving family in decision-making processes. According to the SELFIE framework for multimorbidity, engaging informal caregivers in shared decision-making is a critical aspect of integrated care programs [ 35 ]. Nevertheless, involving informal caregivers in shared decision-making is not yet a common practice in healthcare. Although informal caregivers are sometimes asked for their opinion, they are often not included in decision-making processes alongside the patient and healthcare providers [ 36 ]. Moreover, there is a lack of evidence on how to successfully implement SDM in healthcare settings [ 37 , 38 ]. In the future, researchers should acknowledge the vital role that shared decision-making plays in this context and aim to make it a fundamental part of integrated care models. Furthermore, researchers should actively engage patients in research endeavors and seek to understand their perspectives on concepts such as ‘patient-centeredness’ and ‘effective communication’.

Participants emphasized the role of multidisciplinary care in managing mental and physical comorbidity. Integrated care is important for effectively managing complex health conditions that involve both mental and physical illnesses. This approach recognizes that these illnesses are interconnected and require coordinated attention from multiple care providers who communicate and collaborate effectively. Achieving integrated care requires a shift in our approach to service delivery, management, and funding, with a focus on the person rather than the disease. This aligns with current national and international policies to integrate mental and physical health care [ 2 , 5 , 12 ]. Additionally, to provide optimal care for older adults with depression and physical multimorbidity, healthcare providers should possess expertise in both psychiatric and somatic domains [ 39 , 40 , 41 ], as emphasized by the participants in the focus group sessions. Alongside specific knowledge, effective knowledge sharing among healthcare providers also proved to be a crucial aspect in the focus group interviews. Future integrated care models must recognize the intricate interplay between mental and physical health conditions. Healthcare providers involved in these interventions could benefit from undergoing comprehensive training covering both somatic and psychiatric domains to better address the needs of this specific population. Staff members, such as chief nurses, might consider undergoing training to enhance their ability to effectively impart knowledge to other personnel. However, it is essential to acknowledge and address implementation barriers such as resource and time constraints, as well as workload and staffing issues, to ensure the successful adoption of such training initiatives.

During the discussion, the concept of a “coordinator” or a “responsible caregiver” was introduced as a promising approach to improving the continuity of care and tackling care coordination challenges. Case management in primary care can be more effective if its focus is on enhancing the capabilities and perceived social support of the beneficiaries [ 42 ]. However, there is uncertainty about whether case management improves patient and service outcomes or reduces costs [ 43 ]. Future research should focus on understanding what works in case management interventions, who benefits from them, and how they can be more effective.

Technological advancements in mental health care have the potential to empower patients and promote greater autonomy in managing their mental health. Concrete examples of such advancements include online psychological interventions and remote monitoring of patients’ progress [ 44 , 45 ]. In certain situations, the use of technology-facilitated healthcare can result in an improved quality of life, decreased feelings of isolation, and strengthened social networks [ 46 ]. Nonetheless, healthcare providers must consider the obstacles that may impede the implementation of eHealth among the older population. A recent review explored the barriers and facilitators of the use of technology-facilitated health care (eHealth) in older adults [ 47 ]. These barriers can include, for instance, a lack of experience or proficiency with eHealth or technology [ 48 , 49 , 50 , 51 ], a lack of confidence in using eHealth solutions [ 52 ], and limitations related to aging [ 47 ]. Throughout the course of the present study, participants highlighted the advantages offered by eHealth, while also acknowledging potential challenges that may arise, such as ensuring privacy protection, preserving personal connections, and addressing accessibility issues for older individuals with regards to technology. To ensure the delivery of high-quality care, future integrated care interventions could explore the potential of technological advancements, such as video consultations and shared communication platforms, while considering the unique vulnerabilities of older adults.

In our study, we adopted an inductive approach, allowing themes to organically surface from the data. Nevertheless, we also contemplate the potential merits of employing a deductive methodology, such as using established frameworks like the Consolidated Framework for Implementation Research (CFIR) to discern prevalent barriers and facilitators within care processes. Subsequent investigations could delve into these avenues for additional insights [ 53 ]. The findings of this study contribute to the existing literature by examining the perspectives of healthcare providers on the provision of care for older adults with depression and physical multimorbidity. Focus group interviews were an optimal choice for qualitative research due to the valuable group dynamics and interactions they facilitated [ 54 ]. However, the study also has several limitations. Firstly, the number of participants varied significantly between the three focus groups, with eight participants in the first group, and only four and five participants in the second and third groups, respectively. This may have resulted in less diverse perspectives and answers in the smaller groups. Unequal group sizes can influence the dynamics within the focus group. Larger groups may dominate the discussion, silencing quieter participants and hindering diverse viewpoints. Conversely, smaller groups may lack diversity and limit the depth of discussion. Additionally, although we attempted to include participants with heterogeneous profiles, the first focus group consisted solely of residential healthcare providers, whereas the second and third groups included HCPs from the primary care environment. This may have influenced the dynamics and outcomes of the focus groups. Furthermore, while the use of an online format for the third focus group discussion provided flexibility, opinions on online focus groups vary and this format may have affected the quality of data collected.
Finally, it is worth noting that the demographic information we collected from participants was somewhat limited, focusing solely on their gender and profession. It could be beneficial to gather additional details, such as years of experience, to explore potential variations in perceptions, particularly between healthcare providers who are at the beginning of their careers and those with more experience.

In conclusion, improving care for older adults dealing with depression and multimorbidity requires a significant shift. Placing the patient at the center of the care process and empowering them to take responsibility for their own care for as long as possible is crucial to achieving desirable healthcare outcomes. Collaborative efforts among diverse healthcare providers, facilitated by a dedicated care coordinator, are essential. Additionally, the focus groups emphasized the importance of involving patients and family members in care decisions. Integrating technological features, such as digital shared medical files, tablets, automatic pill dispensers, exercise robots, and video consultations, can significantly improve the efficiency and timeliness of care processes. Furthermore, it may be beneficial for healthcare providers to receive comprehensive training in both somatic and psychiatric domains to effectively address the needs of this specific population, including training for staff members like chief nurses in knowledge sharing. There is a pressing need for improvement in communication, both among healthcare providers and between healthcare providers and patients/families, particularly with a view to enhancing clarity regarding care tasks and time schedules. By integrating these enhancements into future care models, we can ensure comprehensive and holistic care that addresses the unique needs of older adults with depression and physical multimorbidity.

Annex I: Semi-structured topic guide.

Patient persona (poster) .

What is the current state of care for Antoon?

What are the key areas of concern for Antoon? E.g. medication interactions, fall prevention, adapted nutrition.

What are your experiences with providing care for these patients?

According to you, what is needed to deliver quality care to this target group? E.g. involving caregivers/family, evidence-based practice, etc.

What aspects are going well?

Are there any areas that need improvement?

How would you describe the core values of care as currently organized? E.g. multidisciplinary care, shared decision making, person-centered, empathetic, etc.

How would you shape the future of care?

What areas do you see as having potential for growth?

What factors can contribute to better healthcare delivery?

What can you do yourselves?

What is the role of patients and their family/caregivers? Describe the ideal caregiver from your perspective.

What is the role of the healthcare provider? How does the role of one provider differ from another?

What are the core values or key issues that should be addressed? E.g. multidisciplinary care, shared decision making, empathetic care, eHealth, person-centered care, continuity of care, self-management, proactive care, etc.

Data availability

The data used and analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

COREQ: Consolidated Criteria for Reporting Qualitative Research

HCPs: healthcare providers

Integrated care program for home-dwelling older adults with depression and physical multimorbidity

QUAGOL: Qualitative Analysis Guide of Leuven

UPC: University Psychiatric Centre

De Hert M, Correll CU, Bobes J, Cetkovich-Bakmas M, Cohen D, Asai I, et al. Physical illness in patients with severe mental disorders. I. Prevalence, impact of medications and disparities in health care. World Psychiatry. 2011;10(1):52–77. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3048500/ .

Jespers V, Christiaens W, Kohn L, Savoye I, Mistiaen P. Somatische zorg in een psychiatrische setting - Synthese [Somatic care in a psychiatric setting - synthesis]. www.kce.fgov.be .

De Hert M, Cohen D, Bobes J, Cetkovich-Bakmas M, Leucht S, Ndetei DM, et al. Physical illness in patients with severe mental disorders. II. Barriers to care, monitoring and treatment guidelines, plus recommendations at the system and individual level. World Psychiatry. 2011;10(2):138–51. https://pubmed.ncbi.nlm.nih.gov/21633691/ .

Chang ET, Vinzon M, Cohen AN, Young AS. Effective models urgently needed to improve physical care for people with serious mental illnesses. Health Serv Insights. 2019;12. https://doi.org/10.1177/1178632919837628 .

OECD. A New Benchmark for Mental Health Systems. 2021. https://doi.org/10.1787/4ed890f6-en .

Firth J, Siddiqi N, Koyanagi A, Siskind D, Rosenbaum S, Galletly C, et al. The Lancet Psychiatry Commission: a blueprint for protecting physical health in people with mental illness. Lancet Psychiatry. 2019;6:675–712. https://doi.org/10.1016/S2215-0366(19)30132-4 .

Bartels SJ, Naslund JA. The underside of the silver tsunami: older adults and mental health care. N Engl J Med. 2013;368(6):493–6. https://doi.org/10.1056/nejmp1211456 .

Das P, Naylor C, Majeed A. Bringing together physical and mental health within primary care: a new frontier for integrated care. J R Soc Med. 2016;109(10):364–6. https://journals.sagepub.com/doi/full/10.1177/0141076816665270 .

Walrave R, Beerten SG, Mamouris P, Coteur K, van Nuland M, van Pottelbergh G, et al. Trends in the epidemiology of depression and comorbidities from 2000 to 2019 in Belgium. BMC Prim Care. 2022;23(1):1–12. https://doi.org/10.1186/s12875-022-01769-w .

De Lepeleire J, Smit D, Hill L, Walton I. Time for change, now more than ever! European Forum for Primary Care; 2020.

RIZIV. Meerjarig begrotingstraject voor de verzekering voor geneeskundige verzorging [Multi-year budget trajectory for health insurance]. 2021. https://www.riziv.fgov.be/SiteCollectionDocuments/meerjarig_begrotingstraject_verzekering_geneeskundige_verzorging_2022_2024.pdf .

WHO. The WHO Special Initiative for Mental Health (2019–2023): universal health coverage for mental health. 2019. https://eupha.org/repository/EUPHW/Resources/The_WHO_Special_Initiative_for_Mental_Health_2019-2023.pdf .

Tops L, Beerten SG, Vermandere M. Integrated care models for older adults with depression and physical multimorbidity: a scoping review [Internet]. 2022 [cited 2022 May 3]. https://osf.io/t94d3/ .

Tritter JQ, McCallum A. The snakes and ladders of user involvement: moving beyond Arnstein. Health Policy (New York). 2006;76(2):156–68.

Article   Google Scholar  

Alderson H, Kaner E, O’donnell A, Bate A. A Qualitative Exploration of Stakeholder Involvement in Decision-Making for Alcohol Treatment and Prevention Services. International Journal of Environmental Research and Public Health. 2022;19:2148 [Internet]. 2022 [cited 2023 Jan 5];19(4):2148. https://www.mdpi.com/1660-4601/19/4/2148/htm .

Stoker G. Public Value Management. http://dx.doi.org.kuleuven.e-bronnen.be/101177/0275074005282583 [Internet]. 2016 [cited 2023 Jan 5];36(1):41–57. https://journals-sagepub-com.kuleuven.e-bronnen.be/doi/abs/10.1177/0275074005282583 .

McMurray R. Our reforms, our partnerships, same problems: The chronic case of the English NHS. Public Money and Management [Internet]. 2007 [cited 2023 Jan 5];27(1):77–82. https://www.tandfonline.com/action/journalInformation?journalCode=rpmm20 .

Kreis J, Puhan MA, Schünemann HJ, Dickersin K. Consumer involvement in systematic reviews of comparative effectiveness research. Health Expectations [Internet]. 2013 [cited 2023 Jan 5];16(4):323–37. https://onlinelibrary-wiley-com.kuleuven.e-bronnen.be/doi/full/ https://doi.org/10.1111/j.1369-7625.2011.00722.x .

Faulkner A. Exploring the impact of public involvement on the quality of research: examples exploring the impact of public involvement on the quality of research: examples contents. 2013 [cited 2023 Jan 5]; Available from: www.invo.org.uk/invonet/about-invonet/ .

Klüver L, Nielsen RO, Jorgensen ML. Policy-Oriented Technology Assessment Across Europe. 2016.

Bullock A, Morris ZS, Atwell C. Collaboration between Health Services Managers and Researchers: Making a Difference? http://dx.doi.org.kuleuven.e-bronnen.be/101258/jhsrp2011011099 , Mar. 20];17(SUPPL. 2):2–10. https://journals-sagepub-com.kuleuven.e-bronnen.be/doi/ https://doi.org/10.1258/jhsrp.2011.011099 .

Pentland D, Forsyth K, Maciver D, Walsh M, Murray R, Irvine L et al. Key characteristics of knowledge transfer and exchange in healthcare: integrative literature review. J Adv Nurs [Internet]. 2011 [cited 2024 Mar 20];67(7):1408–25. https://onlinelibrary.wiley.com/doi/full/ https://doi.org/10.1111/j.1365-2648.2011.05631.x .

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care [Internet]. 2007 [cited 2023 May 25];19(6):349–57. https://academic-oup-com.kuleuven.e-bronnen.be/intqhc/article/19/6/349/1791966 .

Corbin J, Strauss A. Basics of Qualitative Research (3rd ed.): Techniques and Procedures for Developing Grounded Theory. Basics of Qualitative Research (3rd ed): Techniques and Procedures for Developing Grounded Theory. 2012.

Onwuegbuzie AJ, Dickinson WB, Leech NL, Zoran AG. A Qualitative Framework for Collecting and Analyzing Data in Focus Group Research. 2009 [cited 2022 Mar 31];8(3):1–21. https://journals.sagepub.com/doi/full/10.1177/160940690900800301 .

Patton MQ. Qualitative research and evaluation methods: theory and practice. Inc: SAGE; 2015. p. 832.

Google Scholar  

Tops L, Gabriël S, Mathieu B, Mieke V, Deschodt VM, Vermandere M. Integrated Care Models for Older Adults with Depression and Physical Comorbidity: A Scoping Review. Int J Integr Care [Internet]. 2024 [cited 2024 Jan 10];24(1):1. https://doi.org/10.5334/ijic.7576 .

Savin-Baden M, Major CH. Qualitative research: the essential guide to theory and practice. 2012 [cited 2021 Nov 9];569. https://books.google.com/books/about/Qualitative_Research.html?hl=nl&id=288XkgEACAAJ

Stewart DW, Shamdasani PN. SAGE. 2015 [cited 2021 Nov 8]. Focus Groups: Theory and Practice - David W. Stewart, Prem N. Shamdasani - Google Boeken. Available from: https://books.google.be/books?hl=nl&lr=&id=1svuAwAAQBAJ&oi=fnd&pg=PP1&ots=K6G9GM5yZE&sig=ctzduxpFJbBVqewSoLtMLNDobWk&redir_esc=yv=onepageqf=false

Naturalistic Inquiry - Yvonna S. Lincoln, Egon G. Guba, Egon G. Guba 19.-2008 - Google Books [Internet]. [cited 2022 May 11]. Available from: https://books.google.be/books?hl=enlr=id=2oA9aWlNeooCoi=fndpg=PA7ots=0uovSbR6upsig=XfYNolpLZ5Q2uNFQKwZhzklvs9gredir_esc=y#v=onepageqf=false

Constructing Grounded Theory: A Practical Guide through Qualitative Analysis - Kathy Charmaz - Google Books [Internet]. [cited 2022 May 11]. Available from: https://books.google.be/books?hl=enlr=id=2ThdBAAAQBAJoi=fndpg=PP1ots=f_nT6KmHC_sig=7TAySU13rMICwQ1BYh1wHWKdp0gredir_esc=y#v=onepageqf=false

Dierckx de Casterle B, Gastmans C, Bryon E, Denier Y. QUAGOL: A guide for qualitative data analysis. Int J Nurs Stud [Internet]. 2012 [cited 2024 Jan 26];49(3):360–71. https://doi.org/10.1016/j.ijnurstu.2011.09.012 .

Raue PJ, Schulberg HC, Bruce ML, Banerjee S, Artis A, Espejo M, Effectiveness of shared decision-making for elderly depressed minority primary care patients. Am J Geriatr Psychiatry [Internet]. 2019 [cited 2023 May 3];27(8):883. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6646064/ .

Beitinger R, Kissling W, Hamann J. Trends and perspectives of shared decision-making in schizophrenia and related disorders. Curr Opin Psychiatry [Internet]. 2014 [cited 2023 May 4];27(3):222–9. https://journals-lww-com.kuleuven.e-bronnen.be/co-psychiatry/Fulltext/2014/05000/Trends_and_perspectives_of_shared_decision_making.11.aspx .

Leijten FRM, Struckmann V, van Ginneken E, Czypionka T, Kraus M, Reiss M et al. The SELFIE framework for integrated care for multi-morbidity: Development and description. Health Policy (New York) [Internet]. 2018 [cited 2022 Nov 15];122(1):12–22. https://doi.org/10.1016/j.healthpol.2017.06.002 .

Hamann J, Heres S. Why and how Family caregivers should participate in Shared decision making in Mental Health OPEN FORUM. Psychiatric Serv. 2019;70(5):418–21.

Bunn F, Goodman C, Manthorpe J, Durand MA, Hodkinson I, Rait G et al. Supporting shared decision-making for older people with multiple health and social care needs: a protocol for a realist synthesis to inform integrated care models. BMJ Open [Internet]. 2017 [cited 2024 Mar 11];7(2):e014026. https://bmjopen.bmj.com/content/7/2/e014026 .

Elwyn G, Scholl I, Tietbohl C, Mann M, Edwards AG, Clay C, et al. Many miles to go… a systematic review of the implementation of patient decision support interventions into routine clinical practice. BMC Med Inform Decis Mak [Internet]. 2013 [cited 2024 Mar 11];13(Suppl 2):S14. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4044318/ .

Alberque C, Gex-Fabry M, Whitaker-Clinch B, Eytan A. The five-year evolution of a mixed Psychiatric and somatic care unit: a European experience. Psychosomatics. 2009;50(4):354–61.

Article   PubMed   Google Scholar  

Sunderji N, Ion A, Huynh D, Benassi P, Ghavam-Rassoul A, Carvalhal A. Advancing Integrated Care through Psychiatric Workforce Development: A Systematic Review of Educational Interventions to Train Psychiatrists in Integrated Care. Canadian Journal of Psychiatry [Internet]. 2018 [cited 2024 Mar 11];63(8):513–25. https://journals-sagepub-com.kuleuven.e-bronnen.be/doi/full/ https://doi.org/10.1177/0706743718772520 .

Michielsen L, Bischoff EWMA, Schermer T, Laurant M. Primary healthcare competencies needed in the management of person-centred integrated care for chronic illness and multimorbidity: Results of a scoping review. BMC Primary Care [Internet]. 2023 [cited 2024 Mar 11];24(1):1–13. https://link.springer.com/articles/ https://doi.org/10.1186/s12875-023-02050-4 .

Durme T, Van, Schmitz O, Cès S, Lambert AS, Billings J, Anthierens S et al. Why Is Case Management Effective? A Realist Evaluation of Case Management for Frail, Community-Dwelling Older People: Lessons Learned from Belgium. Open J Nurs [Internet]. 2016 [cited 2023 Sep 22];6(10):863–80. http://www.scirp.org/journal/PaperInformation.aspx?PaperID=71621 .

Sadler E, Khadjesari Z, Ziemann A, Sheehan KJ, Whitney J, Wilson D et al. Case management for integrated care of older people with frailty in community settings. Cochrane Database of Systematic Reviews [Internet]. 2023 [cited 2023 Sep 22];2023(5). https://www.cochranelibrary.com/cdsr/doi/ https://doi.org/10.1002/14651858.CD013088.pub2/full .

Hollis C, Morriss R, Martin J, Amani S, Cotton R, Denis M et al. Technological innovations in mental healthcare: harnessing the digital revolution. The British Journal of Psychiatry [Internet]. 2015 [cited 2023 May 5];206(4):263–5. https://www.cambridge.org/core/journals/the-british-journal-of-psychiatry/article/technological-innovations-in-mental-healthcare-harnessing-the-digital-revolution/05CBA5A580E121D4F82045DA95ADE5BE .

Lee Ventola C. Mobile Devices and Apps for Health Care Professionals: Uses and Benefits. Pharmacy and Therapeutics [Internet]. 2014 [cited 2023 May 5];39(5):356. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4029126/ .

Etchemendy E, Baños RM, Botella C, Castilla D, Alcañiz M, Rasal P, et al. An e-health platform for the elderly population: the butler system. Comput Educ. 2011;56(1):275–9.

Wilson J, Heinsch M, Betts D, Booth D, Kay-Lambkin F. Barriers and facilitators to the use of e-health by older adults: a scoping review. BMC Public Health [Internet]. 2021 [cited 2023 May 5];21(1):1–12. https://link.springer.com/articles/ https://doi.org/10.1186/s12889-021-11623-w .

Nymberg VM, Bolmsjö BB, Wolff M, Calling S, Gerward S, Sandberg M. ‘Having to learn this so late in our lives… Swedish elderly patients’ beliefs, experiences, attitudes and expectations of e-health in primary health care. http://www.manuscriptmanager.com/sjphc [Internet]. 2019 [cited 2023 May 5];37(1):41–52. Available from: http://https://www-tandfonline-com.kuleuven.e-bronnen.be/doi/abs/10.1080/02813432.2019.1570612 .

Pywell J, Vijaykumar S, Dodd A, Coventry L. Barriers to older adults’ uptake of mobile-based mental health interventions. Digit Health [Internet]. 2020 [cited 2023 May 5];6. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7016304/ .

Cajita MI, Hodgson NA, Lam KW, Yoo S, Han HR. Facilitators of and Barriers to mHealth Adoption in Older Adults with Heart Failure. Comput Inform Nurs [Internet]. 2018 [cited 2023 May 5];36(8):376. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6086749/ .

De Veer AJE, Peeters JM, Brabers AEM, Schellevis FG, Rademakers JJDJM, Francke AL. Determinants of the intention to use e-Health by community dwelling older people. BMC Health Serv Res [Internet]. 2015 [cited 2023 May 5];15(1). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4364096/ .

Rasche P, Wille M, Bröhl C, Theis S, Schäfer K, Knobe M et al. Prevalence of Health App Use Among Older Adults in Germany: National Survey. JMIR Mhealth Uhealth. 2018;6(1):e26. https://mhealth.jmir.org/2018/1/e26 [Internet]. 2018 [cited 2023 May 5];6(1):e8619. Available from: https://mhealth.jmir.org/2018/1 .

Damschroder LJ, Reardon CM, Widerquist MAO, Lowery J. The updated Consolidated Framework for Implementation Research based on user feedback. Implementation Science 2022 17:1 [Internet]. 2022 [cited 2024 Mar 18];17(1):1–16. https://implementationscience.biomedcentral.com/articles/ https://doi.org/10.1186/s13012-022-01245-0 .

Morgan DL. Basic and Advanced Focus Groups. 2019.

Download references

Acknowledgements

The authors would like to thank all the participants for their time and invaluable contribution to this study.

This research received funding from the internal resources of KU Leuven (C26M/22/002).

Author information

Authors and Affiliations

Department of Public Health and Primary Care, KU Leuven, Leuven, Belgium

Laura Tops, Mei Lin Cromboom, Anouk Tans, Mieke Deschodt & Mieke Vermandere

Competence Center of Nursing, University Hospitals Leuven, Leuven, Belgium

Mieke Deschodt

Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium

Mathieu Vandenbulcke

University Psychiatric Center, KU Leuven, Leuven, Belgium

Mathieu Vandenbulcke & Mieke Vermandere


Contributions

LT took charge of designing, recruiting, analyzing, and writing the article. MC played a key role in recruitment, observed two out of three focus group sessions, and collaborated on data analysis with LT. AT contributed significantly by shaping the topic guide, moderating discussions, and participating in the article’s writing. MD, MaV, and MV were integral to the study’s design. All authors collectively approved the final publication version, taking responsibility for ensuring the accuracy and integrity of the entire work. They actively addressed and resolved any questions or issues that emerged during the investigation.

Corresponding author

Correspondence to Mieke Vermandere .

Ethics declarations

Ethics approval and consent to participate

The research reported in this paper adhered to the principles of the Declaration of Helsinki. All participants gave written informed consent. The study was approved by the Ethical Committee of UZ/KU Leuven (S66783) and the local Ethical Committee of UPC KU Leuven (EC2022-679).

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Tops, L., Cromboom, M.L., Tans, A. et al. Healthcare providers’ perception of caring for older patients with depression and physical multimorbidity: insights from a focus group study. BMC Primary Care 25, 223 (2024). https://doi.org/10.1186/s12875-024-02447-9


Received: 13 October 2023

Accepted: 23 May 2024

Published: 21 June 2024

DOI: https://doi.org/10.1186/s12875-024-02447-9


  • Multimorbidity
  • Collaborative care
  • Depressive disorder
  • Older adults
  • Multidisciplinary teams

BMC Primary Care

ISSN: 2731-4553


  • Victor Yocco
  • Apr 9, 2024

Connecting With Users: Applying Principles Of Communication To UX Research

  • 30 min read
  • UX, User Research, Communication

About The Author

Victor Yocco, PhD, has over a decade of experience as a UX researcher and research director. He is currently affiliated with Allelo Design.


Communication is in everything we do. We communicate with users through our research, our design, and, ultimately, the products and services we offer. UX practitioners and those working on digital product teams benefit from understanding principles of communication and their application to our craft. Treating our UX processes as a mode of communication between users and the digital environment can help unveil in-depth, actionable insights.

In this article, I’ll focus on UX research. Communication is a core component of UX research , as it serves to bridge the gap between research insights, design strategy, and business outcomes. UX researchers, designers, and those working with UX researchers can apply key aspects of communication theory to help gather valuable insights, enhance user experiences, and create more successful products.

Fundamentals of Communication Theory

Communication as an academic field encompasses various models and principles that highlight the dynamics of communication between individuals and groups. Communication theory examines the transfer of information from one person or group to another. It explores how messages are encoded, transmitted, and decoded, acknowledges the potential for interference (or ‘noise’), and accounts for feedback mechanisms that enhance the communication process.

In this article, I will focus on the Transactional Model of Communication. There are many other models and theories in the academic literature on communication. I have included references at the end of the article for those interested in learning more.

The Transactional Model of Communication (Figure 1) describes a two-way process that emphasizes the simultaneous sending and receiving of messages and feedback. Importantly, it recognizes that communication is shaped by context and is an ongoing, evolving process. I’ll use this model as the framework for applying communication principles to UX research. You’ll find that much of what is covered in the Transactional Model also falls under general best practices for UX research, suggesting that even if we aren’t communication experts, much of what we should be doing is supported by research in this field.

Understanding the Transactional Model

Let’s take a deeper dive into the six key factors and their applications within the realm of UX research:

  • Sender: In UX research, the sender is typically the researcher who conducts interviews, facilitates usability tests, or designs surveys. For example, if you’re administering a user interview, you are the sender who initiates the communication process by asking questions.
  • Receiver: The receiver is the individual who decodes and interprets the messages sent by the sender. In our context, this could be the user you interview or the person taking a survey you have created. They receive and process your questions, providing responses based on their understanding and experiences.
  • Message: This is the content being communicated from the sender to the receiver. In UX research, the message can take various forms, like a set of survey questions, interview prompts, or tasks in a usability test.
  • Channel: This is the medium through which the communication flows. For instance, face-to-face interviews, phone interviews, email surveys administered online, and usability tests conducted via screen sharing are all different communication channels. You might use multiple channels simultaneously, for example, communicating over voice while also using a screen share to show design concepts.
  • Noise: Any factor that may interfere with the communication is regarded as ‘noise.’ In UX research, this could be complex jargon that confuses respondents in a survey, technical issues during a remote usability test, or environmental distractions during an in-person interview.
  • Feedback: The response the receiver sends back after interpreting the message. Examples include the answers a user gives during an interview, the data collected from a completed survey, or the physical reactions of a usability testing participant while completing a task.
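
The six factors above can also be captured as a small data structure when planning a session. Here is a minimal sketch in Python; the class name and all field values are hypothetical examples for illustration, not part of the model itself:

```python
from dataclasses import dataclass, field

@dataclass
class CommunicationPlan:
    """Maps one research session onto the six factors of the
    Transactional Model of Communication."""
    sender: str          # who initiates the exchange (the researcher)
    receiver: str        # who interprets the message (the participant)
    message: str         # the questions, prompts, or tasks
    channel: str         # the medium carrying the message
    noise: list[str] = field(default_factory=list)     # anticipated interference
    feedback: list[str] = field(default_factory=list)  # expected outputs

# A hypothetical remote usability-test session
plan = CommunicationPlan(
    sender="UX researcher",
    receiver="Field technician (participant)",
    message="Five tasks covering the work-order screen",
    channel="Remote usability test over screen share",
    noise=["technical jargon", "screen-share dropouts"],
    feedback=["think-aloud commentary", "task completion data"],
)
print(plan.channel)
```

Writing the plan down this way forces you to fill in every factor, including the ones (noise, feedback) that are easiest to overlook.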

Applying the Transactional Model of Communication to Preparing for UX Research

We can become complacent or feel rushed when creating our research protocols; this is natural given the pace of many workplaces and the need to deliver results quickly. You can apply the lens of the Transactional Model of Communication to your research preparation without adding much time. Doing so should:

  • Improve clarity: The model provides a clear representation of communication, empowering the researcher to plan and conduct studies more effectively.
  • Minimize misunderstanding: By highlighting potential noise sources, you can better anticipate and mitigate user confusion or misunderstandings.
  • Enhance participant engagement: With your attentive eye on feedback, participants are more likely to feel valued, increasing their active involvement and the quality of their input.

You can address the specific elements of the Transactional Model through the following steps while preparing for research:

Defining the Sender and Receiver

In UX research, the sender is often the UX researcher conducting the study, while the receiver is usually the research participant. Understanding this dynamic can help researchers craft questions or tasks more empathetically and efficiently. You should try to collect some information on your participants in advance to prepare yourself for building rapport.

For example, if you are conducting contextual inquiry with the field technicians of an HVAC company, you’ll want to dress appropriately to reflect your understanding of the context in which your participants (receivers) will be conducting their work. Showing up dressed in formal attire might be off-putting and create a negative dynamic between sender and receiver.

Message Creation

The message in UX research typically is the questions asked or tasks assigned during the study. Careful consideration of tenor, terminology, and clarity can aid data accuracy and participant engagement. Whether you are interviewing or creating a survey, you need to double-check that your audience will understand your questions and provide meaningful answers. You can pilot-test your protocol or questionnaire with a few representative individuals to identify areas that might cause confusion.

Using the HVAC example again, you might find that field technicians use certain terminology differently than you expect. Asking them what “tools” they use to complete their tasks, for example, might yield answers about physical tools like pipe wrenches rather than the digital tools you’d find on a computer or smartphone.

Choosing the Right Channel

The channel selection depends on the method of research. For instance, face-to-face methods rely on in-person verbal communication, while remote methods might rely on emails, video calls, or instant messaging. The choice of medium should consider factors like tech accessibility, ease of communication, reliability, and participant familiarity with the channel. For example, you introduce an additional challenge (noise) if you ask someone who has never used an iPhone to test an app on an iPhone.

Minimizing Noise

Noise in UX research comes in many forms, from unclear questions inducing participant confusion to technical issues in remote interviews that cause interruptions. The key is to foresee potential issues and have preemptive solutions ready.

Facilitating Feedback

You should be prepared for how you might collect and act on participant feedback during the research. Encouraging regular feedback from the user during UX research ensures they understand the process and feel heard. This could range from asking them to ‘think aloud’ as they perform tasks to encouraging them to email queries or concerns after the session. You should also document any noise that might impact your findings and account for it in your analysis and reporting.

Track Your Alignment to the Framework

You can track what you do to align your processes with the Transactional Model prior to and during research using a spreadsheet. I’ll provide an example of a spreadsheet I’ve used in the later case study section of this article. You should create your spreadsheet during the process of preparing for research, as some of what you do to prepare should align with the factors of the model.
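
As a sketch of what such a tracking sheet might contain, the short script below generates a starter CSV with one row per factor of the model. The row contents are illustrative assumptions, not the author's actual spreadsheet:

```python
import csv

# One row per Transactional Model factor: what you did to address it
# while preparing, plus a reminder to watch for during sessions.
# All entries are hypothetical examples.
rows = [
    ("Sender", "Rehearsed protocol in a dry run", "Stay neutral in tone"),
    ("Receiver", "Reviewed participant screener info", "Build rapport early"),
    ("Message", "Pilot-tested questions with two colleagues", "Watch for confusion"),
    ("Channel", "Confirmed participant has used video calls", "Have a phone backup"),
    ("Noise", "Asked participants to silence phones", "Note any interruptions"),
    ("Feedback", "Planned think-aloud prompts", "Log clarifying questions"),
]

with open("transactional_model_tracker.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Factor", "Preparation step", "During-session reminder"])
    writer.writerows(rows)
```

Opening the resulting file in any spreadsheet tool gives you a checklist you can extend per study.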

You can use these tips for preparation regardless of the specific research method you are undertaking. Let’s now look closer at a few common methods and get specific on how you can align your actions with the Transactional Model.

Applying the Transactional Model to Common UX Research Methods

UX research relies on interaction with users. We can easily incorporate aspects of the Transactional Model of Communication into our most common methods. Utilizing the Transactional Model in conducting interviews, surveys, and usability testing can help provide structure to your process and increase the quality of insights gathered.

Interviews are a common method in qualitative UX research, and they are a natural fit for applying principles from the Transactional Model. In line with the model, the researcher (sender) sends questions (messages) in person or over the phone/computer (channel) to the participant (receiver), who provides answers (feedback) while contending with potential distraction or misunderstanding (noise). Reflecting on communication as transactional reminds us to respect the dynamic between ourselves and the person we are interviewing. Rather than approaching an interview as a unidirectional interrogation, researchers need to view it as a conversation.

Applying the Transactional Model to conducting interviews means we should account for a number of factors that allow for high-quality communication. Note how the following overlap with what we typically call best practices.

Asking Open-ended Questions

To truly harness a two-way flow of communication, open-ended questions, rather than closed-ended ones, are crucial. For instance, rather than asking, “Do you use our mobile application?”, ask, “Can you describe your use of our mobile app?” This encourages the participant to share more expansive and descriptive insights, furthering the dialogue.
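
When reviewing a draft protocol, a rough heuristic can help flag likely closed-ended questions. The sketch below is an illustrative assumption, not a linguistic rule: the marker lists are incomplete by design, and flagged questions still need human judgment:

```python
# Rough heuristic: questions opening with a yes/no auxiliary verb are
# usually closed-ended, unless they contain an open-ended marker such
# as "describe". Both marker lists are illustrative, not exhaustive.
CLOSED_OPENERS = ("do ", "does ", "did ", "is ", "are ", "was ", "were ",
                  "can ", "could ", "will ", "would ", "have ", "has ")
OPEN_MARKERS = ("how ", "what ", "why ", "tell me", "walk me")

def looks_closed_ended(question: str) -> bool:
    q = question.strip().lower()
    if q.startswith(OPEN_MARKERS) or "describe" in q:
        return False
    return q.startswith(CLOSED_OPENERS)

print(looks_closed_ended("Do you use our mobile application?"))            # True
print(looks_closed_ended("Can you describe your use of our mobile app?"))  # False
```

Running every draft question through a check like this is a quick complement to, not a replacement for, pilot-testing the protocol with real people.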

Actively Listening

As the success of an interview relies on the participant’s responses, active listening is a crucial skill for UX researchers. The researcher should encourage participants to express their thoughts and feelings freely. Reflective listening techniques, such as paraphrasing or summarizing what the participant has shared, can reinforce to the interviewee that their contributions are being acknowledged and valued. It also provides an opportunity to clarify potential noise or misunderstandings that may arise.

Being Responsive

Building on the simultaneous send-receive nature of the Transactional Model, researchers must remain responsive during interviews. Providing non-verbal cues (like nodding) and verbal affirmations (“I see,” “Interesting”) lets participants know their message is being received and understood, making them feel comfortable and more willing to share.

We should always attempt to account for noise in advance, as well as during our interview sessions. Noise, in the form of misinterpretations or distractions, can disrupt effective communication. Researchers can proactively reduce noise by conducting a dry run in advance of the scheduled interviews. This helps you become more fluent at going through the interview and also helps identify areas that might need improvement or be misunderstood by participants. You also reduce noise by creating a conducive interview environment, minimizing potential distractions, and asking clarifying questions during the interview whenever necessary.

For example, if a participant uses a term the researcher doesn’t understand, the researcher should politely ask for clarification rather than guessing its meaning and potentially misinterpreting the data.

Additional forms of noise can include participant confusion or distraction. You should let participants know to ask if they are unclear on anything you say or do. It’s a good idea to always ask participants to put their smartphones on mute. You should only provide information critical to the process when introducing the interview or tasks. For example, you don’t need to give a full background of the history of the product you are researching if that isn’t required for the participant to complete the interview. However, you should let them know the purpose of the research, gain their consent to participate, and inform them of how long you expect the session to last.

Strategizing the Flow

Researchers should build strategic thinking into their interviews to support the Transaction Model. Starting the interview with less intrusive questions can help establish rapport and make the participant more comfortable, while more challenging or sensitive questions can be left for later when the interviewee feels more at ease.

A well-planned interview encourages a fluid dialogue and exchange of ideas. This is another area where conducting a dry run can help to ensure high-quality research. You and your dry-run participants should recognize areas where questions aren’t flowing in the best order or don’t make sense in the context of the interview, allowing you to correct the flow in advance.

While much of what the Transactional Model informs for interviews already aligns with common best practices, the model suggests a deeper consideration of factors we can neglect when we become overly comfortable with interviewing or are unaware of their implications: context considerations, power dynamics, and post-interview actions.

Context Considerations

You need to account for both the context of the participant, e.g., their background, demographic, and psychographic information, and the context of the interview itself. You should make subtle yet meaningful modifications depending on the channel through which you are conducting the interview.

For example, you should utilize video and be aware of your facial and physical responses if you are conducting an interview using an online platform, whereas if it’s a phone interview, you will need to rely on verbal affirmations that you are listening and following along, while also being mindful not to interrupt the participant while they are speaking.

Power Dynamics

You need to be aware of how your role, background, and identity might influence the power dynamics of the interview. You can attempt to address power dynamics by sharing research goals transparently and addressing any concerns about bias that a participant shares.

We are responsible for creating a safe and inclusive space for our interviews. You do this through the use of inclusive language, listening actively without judgment, and being flexible to accommodate different ways of knowing and expressing experiences. You should also empower participants as collaborators whenever possible. You can offer opportunities for participants to share feedback on the interview process and analysis. Doing this validates participants’ experiences and knowledge and ensures their voices are heard and valued.

Post-Interview Actions

You have a number of options for actions that can close the loop of your interviews with participants in line with the “feedback” the model suggests is a critical part of communication. Some tactics you can consider following your interview include:

  • Debriefing: Dedicate a few minutes at the end to discuss the participant’s overall experience, impressions, and suggestions for future interviews.
  • Short surveys: Send a brief survey via email or an online platform to gather feedback on the interview experience.
  • Follow-up calls: Consider follow-up calls with specific participants to delve deeper into their feedback and gain additional insight if you find that is warranted.
  • Thank-you emails: Include a “feedback” section in your thank-you email, encouraging participants to share their thoughts on the interview.

You also need to do something with the feedback you receive. Researchers and product teams should make time for reflexivity and critical self-awareness.

As practitioners in a human-focused field, we are expected to continuously examine how our assumptions and biases might influence our interviews and findings.

We shouldn’t practice our craft in a silo. Instead, seeking feedback from colleagues and mentors to maintain ethical research practices should be a standard practice for interviews and all UX research methods.

By considering interviews as an ongoing transaction and exchange of ideas rather than a unidirectional Q&A, UX researchers can create a more communicative and engaging environment. You can see how models of communication have informed best practices for interviews. With a better knowledge of the Transactional Model, you can go deeper and check your work against the framework of the model.

The Transactional Model of Communication reminds us to acknowledge the feedback loop even in seemingly one-way communication methods like surveys. Instead of merely sending out questions and collecting responses, we need to provide space for respondents to voice their thoughts and opinions freely. When we make participants feel heard, engagement with our surveys should increase, dropouts should decrease, and response quality should improve.

Like other methods, surveys map onto the model’s components: the researcher(s) who create the instructions and questionnaire (sender); the survey itself, including any instructions, disclaimers, and consent forms (message); how the survey is administered, e.g., online, in person, or pen and paper (channel); the participant (receiver); potential misunderstandings or distractions (noise); and responses (feedback).

Designing the Survey

Understanding the Transactional Model will help researchers design more effective surveys. Researchers are encouraged to be aware of both their role as the sender and to anticipate the participant’s perspective as the receiver. Begin surveys with clear instructions, explaining why you’re conducting the survey and how long it’s estimated to take. This establishes a more communicative relationship with respondents right from the start. Test these instructions with multiple people prior to launching the survey.

Crafting Questions

The questions should be crafted to elicit meaningful feedback, not just a simple yes or no. Consider asking scaled questions or using items that have been statistically validated to measure particular attributes of users.

For example, if you were looking deeper at a mobile banking application, rather than asking, “Did you find our product easy to use?” you would want to break that out into multiple aspects of the experience and ask about each with a separate question, such as “On a scale of 1–7, with 1 being extremely difficult and 7 being extremely easy, how would you rate your experience transferring money from one account to another?”
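A side benefit of scaled items is that the responses become directly analyzable. A minimal sketch, using hypothetical ratings for the transfer-ease item above:

```python
from statistics import mean, stdev

# Hypothetical responses to the 1-7 transfer-ease item described above.
ratings = [6, 5, 7, 4, 6, 5, 3, 6]

print(f"n={len(ratings)}, mean={mean(ratings):.2f}, sd={stdev(ratings):.2f}")

# Share of respondents who rated the task "easy" (5 or above).
easy_share = sum(r >= 5 for r in ratings) / len(ratings)
print(f"rated easy (5-7): {easy_share:.0%}")
# → rated easy (5-7): 75%
```

A yes/no question would give you only a proportion; the scaled item gives you a mean, a spread, and the ability to compare scores across tasks or releases.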

Reducing ‘noise,’ or misunderstandings, is crucial for increasing the reliability of responses. Your first line of defense in reducing noise is to make sure you are sampling from the appropriate population. Use a screener to filter out non-viable participants before including them in the survey: correctly identify the characteristics of the population you want to sample from, and exclude anyone falling outside those parameters.

Additionally, you should prioritize recruiting participants through random sampling from the population of potential participants rather than relying on a convenience sample, as this helps ensure you are collecting reliable data.
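The screen-then-sample step can be sketched in a few lines. The pool, screening criteria, and field names below are hypothetical, chosen to echo the mobile banking example:

```python
import random

# Hypothetical screener responses; the criteria are illustrative.
pool = [
    {"id": "P01", "uses_mobile_banking": True,  "age": 34},
    {"id": "P02", "uses_mobile_banking": False, "age": 52},
    {"id": "P03", "uses_mobile_banking": True,  "age": 41},
    {"id": "P04", "uses_mobile_banking": True,  "age": 17},  # outside target range
    {"id": "P05", "uses_mobile_banking": True,  "age": 29},
]

# Screen out non-viable participants first...
viable = [p for p in pool if p["uses_mobile_banking"] and 18 <= p["age"] <= 65]

# ...then draw a random sample from the viable pool rather than
# taking whoever is easiest to reach.
random.seed(7)  # seeded only so the example is reproducible
invited = random.sample(viable, k=2)
print([p["id"] for p in invited])
```

The order matters: screening first guarantees every invited participant meets the criteria, and the random draw guards against the self-selection bias of a convenience sample.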

When looking at the survey itself, there are a number of recommendations to reduce noise. You should ensure questions are easily understandable, avoid technical jargon, and sequence questions logically. A question bank should be reviewed and tested before being finalized for distribution.

For example, question statements like “Do you use and like this feature?” can confuse respondents because they are actually two separate questions: do you use the feature, and do you like the feature? You should separate out questions like this into more than one question.

You should use visual aids that are relevant whenever possible to enhance the clarity of the questions. For example, if you are asking questions about an application’s “Dashboard” screen, you might want to provide a screenshot of that page so survey takers have a clear understanding of what you are referencing. You should also avoid the use of jargon if you are surveying a non-technical population and explain any terminology that might be unclear to participants taking the survey.

The Transactional Model suggests that active participation is necessary for effective communication. Participants can become distracted or take a survey without intending to provide thoughtful answers. Particularly for longer surveys, consider adding a question somewhere in the middle to check that participants are paying attention and responding appropriately.

This is often done using a simple math problem such as “What is the answer to 1+1?” Anyone not responding with “2” might not be paying adequate attention to the responses they are providing, and you’d want to look closer at their answers, eliminating them from your analysis if appropriate.
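Filtering out respondents who fail such a check is straightforward. A minimal sketch, with hypothetical field names and data:

```python
# Flag respondents who failed the attention-check item ("What is 1 + 1?").
responses = [
    {"id": "R1", "attention_check": "2", "q1": 6},
    {"id": "R2", "attention_check": "5", "q1": 7},  # failed the check
    {"id": "R3", "attention_check": "2", "q1": 4},
]

passed = [r for r in responses if r["attention_check"].strip() == "2"]
flagged = [r["id"] for r in responses if r not in passed]

print(f"kept {len(passed)} of {len(responses)}; flagged for review: {flagged}")
# → kept 2 of 3; flagged for review: ['R2']
```

Note that flagged respondents are listed for review rather than silently discarded, matching the advice above to look closer before eliminating anyone from the analysis.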

Encouraging Feedback

While descriptive feedback questions are one way of promoting dialogue, you can also include areas where respondents can express any additional thoughts or questions they have outside of the set question list. This is especially useful in online surveys, where researchers can’t immediately address participants’ questions or clarify doubts.

Be mindful that too many open-ended questions can cause fatigue, so limit their number. I recommend two to three open-ended questions, depending on the length of your overall survey.

Post-Survey Actions

After collecting and analyzing the data, you can send follow-up communications to the respondents. Let them know the changes made based on their feedback, thank them for their participation, or even share a summary of the survey results. This fulfills the Transactional Model’s feedback loop and communicates to the respondent that their input was received, valued, and acted upon.

You can also meet this suggestion by providing an email address for participants to follow up if they desire more information post-survey. You are allowing them to complete the loop themselves if they desire.

Applying the transactional model to surveys can breathe new life into the way surveys are conducted in UX research. It encourages active participation from respondents, making the process more interactive and engaging while enhancing the quality of the data collected. You can experiment with applying some or all of the steps listed above. You will likely find you are already doing much of what’s mentioned; however, being explicit allows you to make sure you are thoughtfully applying these principles from the field of communication.

Usability Testing

Usability testing is another clear example of a research method highlighting components of the Transactional Model. In the context of usability testing, the Transactional Model of Communication’s application opens a pathway for a richer understanding of the user experience by positioning both the user and the researcher as sender and receiver of communication simultaneously.

Here are some ways a researcher can use elements of the Transactional Model during usability testing:

Task Assignment as Message Sending

When a researcher assigns tasks to a user during usability testing, they act as the sender in the communication process. To ensure the user accurately receives the message, these tasks need to be clear and well-articulated. For example, a task like “Register a new account on the app” sends a clear message to the user about what they need to do.

You don’t need to tell them how to do the task, as usually, that’s what we are trying to determine from our testing, but if you are not clear on what you want them to do, your message will not resonate in the way it is intended. This is another area where a dry run in advance of the testing is an optimal solution for making sure tasks are worded clearly.

Observing and Listening as Message Receiving

As the participant interacts with the application, concept, or design, the researcher, as the receiver, picks up on verbal and nonverbal cues. For instance, if a user is clicking around aimlessly or murmuring in confusion, the researcher can take these as feedback about certain elements of the design that are unclear or hard to use. You can also ask the user to explain why they are giving these cues you note as a way to provide them with feedback on their communication.

Real-time Interaction

The transactional nature of the model recognizes the importance of real-time interaction. For example, if during testing, the user is unsure of what a task means or how to proceed, the researcher can provide clarification without offering solutions or influencing the user’s action. This interaction follows the communication flow prescribed by the transactional model. We lose the ability to do this during unmoderated testing; however, many design elements are forms of communication that can serve to direct users or clarify the purpose of an experience (to be covered more in article two).

In usability testing, noise could mean unclear tasks, users’ preconceived notions, or even issues like slow software response. Acknowledging noise can help researchers plan and conduct tests better. Again, carrying out a pilot test can help identify any noise in the main test scenarios, allowing for necessary tweaks before actual testing. Other forms of noise can be less obvious but equally intrusive. For example, if you are conducting a test using a MacBook laptop and your participant is used to a PC, there is noise you need to account for, given their unfamiliarity with the laptop you’ve provided.

The fidelity of the design artifact being tested might introduce another form of noise. I’ve always advocated testing at any level of fidelity, but you should note that if you are using “Lorem Ipsum” or black and white designs, this potentially adds noise.

One of my favorite examples of this was a time when I was testing a financial services application, and the designers had put different balances on the screen; however, the total for all balances had not been added up to the correct total. Virtually every person tested noted this discrepancy, although it had nothing to do with the tasks at hand. I had to acknowledge we’d introduced noise to the testing. As at least one participant noted, they wouldn’t trust a tool that wasn’t able to total balances correctly.

Under the Transactional Model’s guidance, feedback isn’t just final thoughts after testing; it should be facilitated at each step of the process. Encouraging ‘think aloud’ protocols, where the user verbalizes their thoughts, reactions, and feelings during testing, ensures a constant flow of useful feedback.

You are receiving feedback throughout the process of usability testing, and the model provides guidance on how you should use that feedback to create a shared meaning with the participants. You will ultimately summarize this meaning in your report. You’ll later end up uncovering if this shared meaning was correctly interpreted when you design or redesign the product based on your findings.

We’ve now covered how to apply the Transactional Model of Communication to three common UX Research methods. All research with humans involves communication. You can break down other UX methods using the Model’s factors to make sure you engage in high-quality research.

Analyzing and Reporting UX Research Data Through the Lens of the Transactional Model

The Transactional Model of Communication doesn’t only apply to the data collection phase (interviews, surveys, or usability testing) of UX research. Its principles can provide valuable insights during the data analysis process.

The Transactional Model instructs us to view any communication as an interactive, multi-layered dialogue — a concept that is particularly useful when unpacking user responses. Consider the ‘message’ components: In the context of data analysis, the messages are the users’ responses. As researchers, thinking critically about how respondents may have internally processed the survey questions, interview discussion, or usability tasks can yield richer insights into user motivations.

Understanding Context

Just as the Transactional Model emphasizes the simultaneous interchange of communication, UX researchers should consider the user’s context while interpreting data. Decoding the meaning behind a user’s words or actions involves understanding their background, experiences, and the situation when they provide responses.

Deciphering Noise

In the Transactional Model, noise presents a potential barrier to effective communication. Similarly, researchers must watch for noise during analysis. Noise, in this context, could involve patterns of confusion, misunderstandings, or problems consistently highlighted by users. You need to account for it, as in the example I provided where participants repeatedly referred to the incorrect math on static wireframes.

Considering Sender-Receiver Dynamics

Remember that, as a UX researcher, your interpretation of user responses is shaped by your own understanding, biases, and preconceptions, just as the responses were shaped by the user’s perceptions. By acknowledging this, researchers can strive to neutralize any subjective influence and keep the analysis centered on the user’s perspective. You can ask other researchers to double-check your work to help account for bias.

For example, if you come up with a clear theme that users need better guidance in the application you are testing, another researcher from outside of the project should come to a similar conclusion if they view the data; if not, you should have a conversation with them to determine what different perspectives you are each bringing to the data analysis.
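The article doesn’t prescribe a statistic for this comparison, but a common way to quantify whether two researchers converge on the same reading of the data is an inter-coder agreement measure such as Cohen’s kappa. A minimal sketch, using hypothetical theme codes assigned to the same ten excerpts:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labeling the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's label distribution.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical codes from two researchers analyzing the same excerpts.
a = ["guidance", "guidance", "navigation", "guidance", "trust",
     "navigation", "guidance", "trust", "guidance", "navigation"]
b = ["guidance", "guidance", "navigation", "trust", "trust",
     "navigation", "guidance", "trust", "guidance", "guidance"]

print(f"kappa = {cohens_kappa(a, b):.2f}")
# → kappa = 0.68
```

A low kappa is exactly the signal described above: it tells you the two researchers are bringing different perspectives to the data and that a conversation is warranted.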

Reporting Results

Understanding your audience is crucial for delivering a persuasive UX research presentation. Tailoring your communication to resonate with the specific concerns and interests of your stakeholders can significantly enhance the impact of your findings. Here are some more details:

  • Identify Stakeholder Groups: Identify the different groups of stakeholders who will be present in your audience. This could include designers, developers, product managers, and executives.
  • Prioritize Information: Prioritize the information based on what matters most to each stakeholder group. For example, designers might be more interested in usability issues, while executives may prioritize business impact.
  • Adapt Communication Style: Adjust your communication style to align with the communication preferences of each group. Provide technical details for developers and emphasize user experience benefits for executives.

Acknowledging Feedback

Respecting the Transactional Model’s feedback loop, remember to revisit user insights after implementing design changes. This ensures you stay user-focused, continuously validating or adjusting your interpretations based on users’ evolving feedback. You can do this in a number of ways. You can reconnect with users to show them updated designs and ask questions to see if the issues you attempted to resolve were resolved.

Another way to address this without having to reconnect with the users is to create a spreadsheet or other document to track all the recommendations that were made and reconcile the changes with what is then updated in the design. You should be able to map the changes users requested to updates or additions to the product roadmap for future updates. This acknowledges that users were heard and that an attempt to address their pain points will be documented.

Crucially, the Transactional Model teaches us that communication is rarely simple or one-dimensional. It encourages UX researchers to take a more nuanced, context-aware approach to data analysis, resulting in deeper user understanding and more accurate, user-validated results.

By maintaining an ongoing feedback loop with users and continually refining interpretations, researchers can ensure that their work remains grounded in real user experiences and needs.

Tracking Your Application of the Transactional Model to Your Practice

You might find it useful to track how you align your research planning and execution to the framework of the Transactional Model. I’ve created a spreadsheet to outline key factors of the model and used this for some of my work. Demonstrated below is an example derived from a study conducted for a banking client that included interviews and usability testing. I completed this spreadsheet during the process of planning and conducting interviews. Anonymized data from our study has been furnished to show an example of how you might populate a similar spreadsheet with your information.

You can customize the spreadsheet structure to fit your specific research topic and interview approach. By documenting your application of the transactional model, you can gain valuable insights into the dynamic nature of communication and improve your interview skills for future research.

| Stage | Columns | Description | Example |
| --- | --- | --- | --- |
| Pre-Interview Planning | Topic/Question (Aligned with research goals) | Identify the research question and design questions that encourage open-ended responses and co-construction of meaning. | Testing mobile banking app’s bill payment feature. How do you set up a new payee? How would you make a payment? What are your overall impressions? |
| | Participant Context | Note relevant demographic and personal information to tailor questions and avoid biased assumptions. | 35-year-old working professional, frequent user of the online banking and mobile application but unfamiliar with using the app for bill pay. |
| | Engagement Strategies | Outline planned strategies for active listening, open-ended questions, clarification prompts, and building rapport. | Open-ended follow-up questions (“Can you elaborate on XYZ?” or “Please explain more to me what you mean by XYZ.”), active listening cues, positive reinforcement (“Thank you for sharing those details”). |
| | Shared Understanding | List potential challenges to understanding the participant’s perspective and strategies for ensuring shared meaning. | Initially, the participant expressed some confusion about the financial jargon I used. I clarified and provided simpler [non-jargon] explanations, ensuring we were on the same page. |
| During Interview | Verbal Cues | Track the participant’s language choices, including metaphors, pauses, and emotional expressions. | Participant used a hesitant tone when describing negative experiences with the bill payment feature. When questioned, they stated it was “likely their fault” for not understanding the flow [it isn’t their fault]. |
| | Nonverbal Cues | Note the participant’s nonverbal communication, such as body language, facial expressions, and eye contact. | Frowning and crossed arms when discussing specific pain points. |
| | Researcher Reflexivity | Record moments where your own biases or assumptions might influence the interview and potential mitigation strategies. | Recognized my own familiarity with the app might bias my interpretation of users’ understanding [e.g., going slower than I would have when entering information]. Asked clarifying questions to avoid imposing my assumptions. |
| | Power Dynamics | Identify instances where power differentials emerge and the actions taken to address them. | Participant expressed trust in the research but admitted feeling hesitant to criticize the app directly. I emphasized anonymity and encouraged open feedback. |
| | Unplanned Questions | List unplanned questions prompted by the participant’s responses that deepen understanding. | What alternative [non-bank app] methods do you use for paying bills? (Prompted by participant’s frustration with app bill pay.) |
| Post-Interview Reflection | Meaning Co-construction | Analyze how both parties contributed to building shared meaning and insights. | Through dialogue, we collaboratively identified specific design flaws in the bill payment interface and explored additional pain points and areas that worked well. |
| | Openness and Flexibility | Evaluate how well you adapted to unexpected responses and maintained an open conversation. | Adapted questioning based on the participant’s emotional cues and adjusted language to minimize technical jargon when that issue was raised. |
| | Participant Feedback | Record any feedback received from participants regarding the interview process and areas for improvement. | “Thank you for the opportunity to be in the study. I’m glad my comments might help improve the app for others. I’d be happy to participate in future studies.” |
| | Ethical Considerations | Reflect on whether the interview aligned with principles of transparency, reciprocity, and acknowledging power dynamics. | Maintained anonymity throughout the interview and ensured informed consent was obtained. Data will be stored and secured as outlined in the research protocol. |
| | Key Themes/Quotes | Use this column to identify emerging themes or save quotes you might refer to later when creating the report. | Frustration with a confusing interface, lack of intuitive navigation, and desire for more customization options. |
| | Analysis Notes | Use as many lines as needed to add notes for consideration during analysis. | Add notes here. |

You can use the suggested columns from this table as you see fit, adding or subtracting as needed, particularly if you use a method other than interviews. I usually add the following additional columns for logistical purposes:

  • Date of Interview,
  • Participant ID,
  • Interview Format (e.g., in person, remote, video, phone).
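If you prefer to generate the tracker programmatically rather than by hand, a minimal sketch using Python’s csv module is below. The column subset and sample row are illustrative, drawn from the table above:

```python
import csv

# A subset of columns from the tracking table, plus the logistical columns.
COLUMNS = [
    "Date of Interview", "Participant ID", "Interview Format",
    "Stage", "Columns", "Notes",
]

rows = [
    {
        "Date of Interview": "2024-03-12",
        "Participant ID": "P07",
        "Interview Format": "remote video",
        "Stage": "During Interview",
        "Columns": "Verbal Cues",
        "Notes": "Hesitant tone when describing the bill-pay flow.",
    },
]

with open("interview_tracker.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```

The resulting CSV opens directly in any spreadsheet tool, so you can keep filling it in by hand during and after each session.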

By incorporating aspects of communication theory into UX research, UX researchers and those who work with UX researchers can enhance the effectiveness of their communication strategies, gather more accurate insights, and create better user experiences. Communication theory provides a framework for understanding the dynamics of communication, and its application to UX research enables researchers to tailor their approaches to specific audiences, employ effective interviewing techniques, design surveys and questionnaires, establish seamless communication channels during usability testing, and interpret data more effectively.

As the field of UX research continues to evolve, integrating communication theory into research practices will become increasingly essential for bridging the gap between users and design teams, ultimately leading to more successful products that resonate with target audiences.

As a UX professional, it is important to continually explore and integrate new theories and methodologies to enhance your practice. By leveraging communication theory principles, you can better understand user needs, improve the user experience, and drive successful outcomes for digital products and services.

Integrating communication theory into UX research is an ongoing journey of learning and implementing best practices. Embracing this approach empowers researchers to effectively communicate their findings to stakeholders and foster collaborative decision-making, ultimately driving positive user experiences and successful design outcomes.

References and Further Reading

  • Shannon, C. E., & Weaver, W., The Mathematical Theory of Communication
  • Grunig, J. E., & Huang, Y. H., From organizational effectiveness to relationship indicators: Antecedents of relationships, public relations strategies, and relationship outcomes
  • Hovland, C. I., Janis, I. L., & Kelley, H. H. (1953), Communication and Persuasion: Psychological Studies of Opinion Change, Yale University Press
  • Chaffee, S. H. (1986), Communication research as an autonomous discipline, Communication Yearbook, 10, 243–274
  • Wood, J. (2015), Interpersonal Communication: Everyday Encounters
  • Littlejohn, S. W., & Foss, K. A. (2011), Theories of Human Communication
  • McQuail, D. (2010), McQuail’s Mass Communication Theory
  • Stewart, J. (2012), Bridges Not Walls: A Book About Interpersonal Communication


  • Open access
  • Published: 22 June 2024

A qualitative exploration of experts’ views about multi-dimensional aspects of hookah smoking control in Iran

  • Sara Dadipoor,
  • Azin Alavi,
  • Hadi Eshaghi Sani Kakhaki,
  • Nahid Shahabi &
  • Zainab Kader

BMC Public Health volume 24, Article number: 1665 (2024)

The related literature has primarily addressed cigarette smoking control, and researchers have largely failed to explore the determinants of hookah smoking (HS) control. In an attempt to fill this gap, the present study explores experts’ views about aspects of HS control in Bandar Abbas, a city in the south of Iran.

The present qualitative study, conducted in 2022 and 2023, used content analysis. Thirty experts in tobacco prevention and control were invited to participate in the research, and twenty-seven accepted the invitation. In-depth, semi-structured, face-to-face interviews were held with the experts. Purposive sampling was used, and data collection continued until data saturation. The interviews lasted between 18 and 65 min. MAXQDA 10.0 was used for data management and analysis.

The expert interviewees had a mean age of 44.77 ± 6.57 years and a mean work experience of 18.6 ± 6.8 years. A total of six main categories were extracted from the data, including using influential figures to control HS, controlling HS through alternative activities, changing beliefs and attitudes toward HS, taking administrative and regulatory measures, and facilitating HS cessation.

This qualitative study explored multifaceted approaches to controlling HS. Using influential figures to control hookah smoking, promoting alternative activities as a means of control, changing beliefs and attitudes, enforcing administrative regulations, and facilitating quit attempts all play an important role in tackling the prevalence of hookah smoking. These findings emphasize the importance of a comprehensive, multifaceted approach that integrates various interventions to effectively address hookah smoking behavior.


Introduction

Hookah is a smoking device used in many countries and is also known as a waterpipe, argileh, shisha, goza, and narghile. In this device, smoke passes through water in a bowl, where it is cooled and filtered before being inhaled. Hookah is a traditional device for tobacco consumption [ 1 ], originating from the Middle East. Today, it is globally popular, particularly among young adults and women [ 2 , 3 ]. Worldwide, flavored tobacco and the absence of regulatory policies have led to an increased rate of hookah smoking (HS) [ 4 ]. As recently reported by the WHO, tobacco consumption accounts for 8 million deaths worldwide annually [ 5 ]. As the research by Le et al. showed, current hookah smokers (HSs) had 37% higher odds of mortality from all causes than non-smokers, while former HSs had 39% higher odds of mortality from any cause than non-smokers [ 6 ].

According to a review article, most studies showed an increasing rate of HS between 2009 and 2016, ranging from 0.4% to 2.9% annually in the Eastern Mediterranean region and from 0.3% to 1% in Europe [ 7 ]. The prevalence of HS varies significantly across gender and region in the Middle East. In 2019, the prevalence among males and females was estimated to be 32.7% and 46.2%, respectively, in Lebanon, 13.4% and 7.8% in Jordan, and 18.0% and 7.9% in Palestine [ 8 ]. HS, especially among women, is becoming increasingly socially acceptable as a normative behavior in the region [ 9 ]. In Iran, it is estimated that 82% of women who smoke tobacco use hookahs [ 10 ]. The overall prevalence of HS among Iranian women is reported to be 3.8–6.3% [ 11 , 12 ]. However, there are large regional variations in HS in Iran. The prevalence of HS among women in southern provinces such as Hormozgan is 9–10 times as high as in other provinces [ 13 ]. In Bandar Abbas in Hormozgan, the prevalence is 15.1%, and it is higher among women than men [ 14 , 15 ]. The high prevalence of HS in Hormozgan can be attributed to the local culture, underestimated HS health risks, the variety of jobs found in hookah cafes, and the lack of any tobacco control measures [ 16 , 17 ].

As a complicated behavior, HS is influenced by many internal and external factors. Some are personal, yet others are interpersonal, social, political and organizational. Among these factors are positive attitude, underestimated health risks of HS, psychological and social gaps, physical and mental attachment to hookah, family issues, media advertisement, ease of access (availability) and the absence of prohibitory rules and poor monitoring and management [ 16 , 18 , 19 ]. Family support, social and psychological needs, family norms, control of external stimuli and political factors have been among the major factors involved in hookah cessation [ 20 ].

Although controlling the factors involved in HS or hookah cessation can, to some extent, help prevent this unhealthy behavior, exploring the determinants of HS control can be particularly useful. The related literature has focused more on controlling cigarette smoking and has attended less to the full range of aspects of smoking prevention and control; each study has typically addressed only one aspect of the matter [ 21 , 22 , 23 ]. Researchers have largely neglected the determinants of HS control. To the best of the present researchers’ knowledge, few qualitative or quantitative studies have been conducted on tobacco control strategies, especially for HS. Thus, it is essential to fill this gap in the literature by soliciting the views of field experts. The present study explores experts’ views of the aspects of HS control in Bandar Abbas, a city in the south of Iran.

Materials and methods

Study design.

The present study employed a qualitative approach, holding in-depth, semi-structured, face-to-face interviews in Bandar Abbas, a city in the south of Iran, from August 2022 to June 2023.

Of note, HS has cultural-historical roots in Bandar Abbas city in Hormozgan. Tobacco use has a long history in the city, and hookahs have been passed down from older generations to younger ones. Hookahs are commonly used to entertain guests at ceremonies of joy and sorrow.

The prevailing culture in Bandar Abbas normalizes HS more than cigarette smoking. HS is very common in women’s get-togethers [ 24 ]. Also, the weather conditions and facilities of the city have made HS a recreational activity for the public, especially during seasonal economic recessions when people have more spare time [ 16 ]. Moreover, the influence of stakeholders in tobacco industry has further spread HS in Bandar Abbas and southern Iran [ 16 ].

Participants

Initially, 30 experts in tobacco prevention and control were invited to participate in the study; twenty-seven accepted the invitation. All had at least 5 years of work experience in controlling and preventing tobacco consumption and, as required for inclusion, held at least a bachelor’s degree.

Inclusion criteria

having academic qualification in the topic of interest.

Exclusion criterion

unwillingness to participate in the research.

The interview guide contained two parts: one enquiring about demographic information, such as age and place of residence, and the other concerning the participants’ overt and covert beliefs about HS. The guide was checked by a panel of five experts in smoking control and qualitative research methodology to decide whether it was appropriate for the study. Adaptations (based on participants’ feedback) were made to the guide after the first five interviews; once finalized, this version was used for all remaining interviews. The interviews lasted between 18 and 65 min.

Each interview began with the four main questions in the interview guide. As the interview continued, follow-up questions were asked to get more details, and probe questions were asked when further exploration was needed. Table 1 contains a list of the questions asked during the interviews.

Data collection

The interviews were conducted by two researchers at a time and place convenient for the participants, always in a quiet setting such as the expert’s work office, a private room at the research center, or a place the participant preferred, such as a park or coffee shop. Sampling was purposive and snowball: the anti-tobacco consumption organization in Hormozgan Province was visited to find the first expert to interview, and after each interview the interviewee was asked to suggest a colleague for the next one. Data collection continued until data saturation was reached.

The following attempts were made to increase the rigor of the findings: (1) sufficient time was spent on data collection (August 2022 to June 2023); (2) to check the accuracy of the researchers’ interpretation of the experts’ comments, the findings were returned to eight randomly selected participants, and minor changes were made to the data after receiving their feedback; (3) the data were provided to the 2nd and 4th authors, who are experts in qualitative research, and their comments helped define and revise the categories and sub-categories. In addition, to support the confirmability of the findings, the categories, sub-categories and a sample coding were provided to two external experts under a robust confidentiality agreement. Where these experts’ comments contradicted the present researchers’ interpretations, the contradictions were resolved through discussion and by referring back to the initial interviews. Initially, a total of 7 main categories were identified from the data; two of these overlapped and, following discussion and a decision by the authors, were merged into a single category named “Using influential figures to control HS.” Discrepancies in the naming of 3 sub-categories were resolved in the same way.

Ethics approval and consent to participate

As for ethical considerations, the procedure was approved by Hormozgan University of Medical Sciences (#IR.HUMS.REC.1400.369). The purpose of the study was explained to all participants, who were assured of the confidentiality of the information they provided. All participants were required to sign an informed consent form and were assured that they could withdraw at any phase of the research. The entire research procedure was in accordance with the relevant guidelines and regulations of research ethics.

Data analysis

All interviews were audio-recorded and then transcribed. After a detailed initial textual analysis of each interview, the next interview was conducted. The transcripts were reviewed independently, line by line, with an open coding approach to identify the underlying concepts in the participants’ statements. As the analysis proceeded, codes and categories were extracted; similarities and differences were identified and distinguished in terms of their inherent features and dimensions. Finally, by comparing the categories, some sub-categories were merged and the main categories took their final form. The researchers reviewed all the extracted codes in a meeting and discussed the categories and subcategories. They agreed on the majority of them and disagreed only on a few cases, which were resolved by referring to the initial interviews and re-examining the codes. The extracted codes were processed in MAXQDA 10.

Results

Among the 30 experts in tobacco consumption invited to participate in the study, 27 entered the study; the others declined, one citing work obligations. The mean age of the expert interviewees was 44.77 ± 6.57 years. Their work experience ranged between 5 and 28 years, with a mean of 18.6 ± 6.8 years. Table 2 summarizes other relevant information.

In total, six categories and 20 sub-categories emerged from the data analysis. Because the amount of data was very large, we decided to focus only on determinants that have been less addressed in the literature. “Changing beliefs and attitudes toward HS” is therefore not discussed, and only five categories and 17 subcategories are presented here (Table 3).

The frequency and proportion of experts commenting on each subcategory are shown in Table 4, listed in descending order. Family support, mentioned by 88.89% of the experts, was the most frequently discussed topic in the interviews.
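As an aside on how such percentages can be reproduced: each subcategory’s frequency is simply the share of the 27 experts who mentioned it. The sketch below is illustrative only; apart from family support (24 of 27 experts, which matches the reported 88.89%), the subcategory names and counts are hypothetical placeholders, not the study’s raw data.

```python
# Illustrative reconstruction of the Table 4 frequencies: the proportion of
# experts (out of 27) who mentioned each subcategory. Counts other than
# "Family support" are hypothetical placeholders, not data from the study.

N_EXPERTS = 27  # experts who entered the study


def frequency_percent(mentions: int, total: int = N_EXPERTS) -> float:
    """Share of experts mentioning a subcategory, as a percentage (2 d.p.)."""
    return round(100 * mentions / total, 2)


subcategory_mentions = {
    "Family support": 24,  # 24/27 reproduces the reported 88.89%
    "NGO participation (hypothetical count)": 20,
}

# List subcategories in descending order of frequency, as in Table 4.
for name, count in sorted(subcategory_mentions.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {frequency_percent(count)}%")  # Family support -> 88.89%
```
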

Using influential figures to control HS

“Using influential figures” emerged as a key determinant of HS control. This main category comprised several distinct sub-categories, addressed below.

Non-governmental organizations’ (NGOs) participation

As the majority of participants agreed, non-governmental organizations (NGOs) can contribute many innovative ideas and much potential, and can significantly help prevent, control and cease HS if engaged alongside executive governmental organizations. Greater public reliance on, and better reception of, NGOs is one reason why they should be involved in the preventive measures the government is required to take. Below are some comments the participants made on this category:

“Trying to incorporate NGOs can have dramatic effects because NGOs are created by people themselves. That is why the public trust them more, because they are seen as the link between people and the government. NGOs communicate well with ordinary people”. (Female, 22 years of experience)

“NGOs have a great potential to help. If the government grants them a budget, they can manage it wisely. If NGOs have a well-defined goal, people welcome them and cooperate with them more”. (Female, 18 years of experience)

Family support

As the participants opined, family support and supervision can be a strong barrier against detrimental behaviors such as HS, whereas inadequate support can lead to deviation and improper decisions, including a tendency toward HS.

“All factors affecting HS can be summarized as family support. If someone is both psychologically and spiritually supported by the family, s/he will hardly ever tend to smoke hookahs”. (Male, 20 years of experience)

Mass media and social network activities

As the majority of participants agreed, forbidding any form of advertisement for hookahs, direct or indirect, in the mass media can be an effective strategy to control and prevent HS. Extensively presenting HS as a health-threatening behavior in the mass media can tremendously influence public beliefs and attitudes, owing to the trust people place in the mass media. Below are some extracts from the participants’ accounts:

“Mass media has succeeded in annihilating certain unhealthy behaviors such as crack consumption. They highlighted the adverse effects and managed to create a deep fear of the drug in the public. Finally, the drug abuse was under control. HS can also be controlled in the same way”. (Female, 20 years of experience)

Peer education

Peer education was perceived by many participants as an effective strategy to control HS. Here is a sample extract from the interviews.

“I think if instructions are provided by peers, they are more effective because those emotions, attitudes and norms are better expressed. Teenagers listen carefully to peers and communicate with them better”. (Male, 5 years of experience)

Popular figures and celebrities

Many participants noted that information or advice from popular figures can significantly affect attitudes toward HS. These trusted sources can include family members, celebrities, footballers popular among youngsters, and clergymen, who can speak against HS and discourage the negative behavior.

“If a celebrity begins to advertise against HS, that will help. Many followers will never want to smoke hookahs anymore or if they are already users, they may quit”. (Female, 16 years of experience)

Controlling HS by alternative activities to HS market/trade

The majority of experts indicated that appropriate alternative activities to the HS market/trade can be an effective strategy to control HS. Below are comments by the participants supporting this idea.

Innovative and creative entrepreneurship for sellers

Most participants agreed that it was essential to find alternative jobs for those who earned a living by selling hookahs; if their jobs were not replaced with better ones, they would never stop selling hookahs, and many adverse social effects could follow.

“The government is supposed to use the least budget available to provide hookah sellers an appropriate job. For instance, the government can help them with interest-free loans. Or it can create a market where all these ex-hookah-sellers work and earn a living”. (Female, 18 years of experience)

Controlling HS by alternatives to smoking as a habit/recreational activity

Replacing smoking with alternative habitual/recreational activities was another strategy suggested to control HS. As the interviewees commented, hookah can only be given up if it is replaced by a better choice. This is explained below along with extracts from the interviews.

Setting up recreational facilities

Most participants agreed that expanding recreational activities can significantly help control and reduce health-threatening behaviors such as HS. Unfortunately, Bandar Abbas is less well equipped with recreational facilities than other cities, so there is not a wide range of leisure activities to choose from.

“The more recreational facilities are provided for families, the less the probability of HS. Yet, these are largely absent here. Unfortunately, there is not even one good park or green space here”. (Male, 23 years of experience)

Reconstructing and renovating old urban areas (e.g., parks, gyms, pedestrian walks, biking lanes), which are hot spots for risky behaviors, can be an effective strategy to prevent, control and cease HS.

Holding festivals and joyful activities

As most participants suggested, actively employing all existing resources can help prevent, control and cease HS. Promising attempts include investing in young talents in art, music, theatre and the like; holding joyous celebrations in neighbourhoods; extending celebrations and festivals beyond official indoor spaces to outdoor spaces, and more specifically to neighbourhoods that could otherwise become centers for HS; and establishing an anti-hookah culture in such celebrations.

“I think if amusing programs were regularly planned in neighbourhoods, people would attend festivities or get-togethers instead of smoking hookahs. Such joyful events can provide a good chance for reminding people of the adverse effects of HS”. (Female, 23 years of experience)

Taking administrative and regulatory measures

Participants agreed that developing new rules and regulations would be an effective strategy; such rules should be preventive, controlling and inhibitive. As the interviewees admitted, there was currently no law against HS, and any that existed was hardly put into practice. The following sub-categories provide more insight into this matter:

Anti-HS legislating and enforcing regulations

As many participants pointed out, imposing heavy fines for HS can, to a great extent, reduce the rate of this unhealthy behavior.

“Though many restaurants and coffee-shops are not allowed to sell hookahs, they break the rules and provide HS services. Immediately after they are fined, they get back to the same old habit. It is because there has been no severe legal prosecution. A minor fine does nothing to stop a high-income restaurant or coffee-shop owner selling hookahs”. (Female, 18 years of experience)

Participatory administration

As recurrently stated by several participants, mutual cooperation among authoritative organizations can dramatically affect HS prevention, control and cessation.

“All those partly in charge of the program should join and start working together. They are to support each other and there should be a division of labor. Now, it is not the case because each organization is working on its own and not as a team. There is no follow-up. One or two organizations alone cannot do the whole thing”. (Male, 28 years of experience)

Tax policies

As suggested by many participants, imposing higher taxes on hookah service providers such as coffee shops can effectively prevent and control HS in society. They suggested that hookah-selling shops be divided into two groups, smoking and non-smoking, with the former paying three- to four-fold higher taxes.

“In many European countries, there are higher taxes on cigarettes and tobacco products. The same should go here. Coffee-shops that offer hookahs should pay taxes three times as high as others”. (Female, 23 years of experience)

“Municipal taxes should be 3–4 fold for coffee-shops that sell hookahs. These shops should pay taxes this year as they threaten citizens’ health. The next year, it is their choice whether they will continue selling hookahs or not” (Female, 19 years of experience).

Citizens’ rights

Most participants contended that raising society’s awareness of citizens’ rights can largely change the public view of HS. HS causes air pollution, and when people perceive themselves as deprived of their right to clean air, they learn to complain to those polluting it. Here is a relevant comment:

“I think awareness of citizen rights can be a great help. We can change the public view. If a family passes by and looks down on me, that will be the end of me. No need to talk anymore! The mere silence means this is our right to enjoy clean air. Certainly, that will help”. (Female, 18 years of experience)

Segregation of HS places

Designating specific places for HS was another strategy that many participants suggested to prevent, control and cease HS. This category reflects the need for strict laws that remove hookahs from public places and confine them to enclosed spaces.

“If anyone who used to freely smoke hookahs at the beach or in parks is now forced to go indoors for smoking and knows s/he cannot smoke hookahs outdoors anymore, s/he might lose interest in smoking hookahs” (Male, 9 years of experience).

Setting limits

Other effective limits suggested included concentrating all hookah-selling centers in one place; forbidding the sale of hookahs to those below 18; forbidding the sale of hookahs for 10 consecutive cases; keeping hookah-selling places at least 100 m from schools; serving no food or drink alongside hookahs; reducing the attractions and facilities of hookah-selling places, for example by forbidding music, trees or plants around the area and other similar amenities; and being strict in issuing the required work permits to applicants.

“Shops that serve hookahs should be at least 100 meters away from schools. If not, they may be tempting to students, especially high school students who may tend to try different flavours when they find a shop nearby”. (Male, 10 years of experience)

“Certain limits should be set. For example, a hookah smoker should not be allowed to do so in parks or greeneries. Then, gradually, we can set stricter rules and say, for example, HSs are not allowed to watch TV and so on. No side dish should be allowed to be served with hookahs. This can tremendously cut down on the original attraction”. (Male, 28 years of experience)

Facilitating HS cessation

Facilitating hookah cessation was another strategy suggested to control this tobacco product, explained below along with extracts from the interviews.

Founding HS cessation clinics

Founding tobacco cessation centers was mentioned as another back-up service to prevent, control and cease HS; the majority of smokers, when tired of the habit, look for places that can help them quit.

“If there are certain clinics exclusively established to help people cease HS, they can really help! People need to be notified at once and be encouraged to visit these clinics. The staff should be supportive experts that can attract people and teach them what to do in an interesting manner”. (Female, 21 years of experience)

“There exists no such thing as an independent tobacco cessation clinic in our city! If such clinics are established and staffed with psychologists, physical educationalists and physicians, they will be a shelter to those tired of smoking”. (Male, 9 years of experience)

Motivational services

Most participants mentioned that encouraging and motivating individuals, or a mixture of motivational strategies, could be an effective supportive approach to controlling HS. Services such as travel ticket discounts, concert ticket discounts and gift cards for those who manage to cease HS can motivate them to maintain the healthy behavior and encourage others to quit. Allocating a budget to healthy entertainment such as cinema, concerts, libraries and musical works can be another effective strategy for HS control; in other words, people can be provided with cultural activities at a low cost.

“If there is cultural subsidy for healthy reactions, for example, if they (i.e., the government) pay part of the cost for concerts, cinemas and gyms, everyone can enjoy healthy leisure at a low cost. The reason why almost everyone smokes hookahs is that it is a cheap amusement”. (Female, 17 years of experience)

Participants also believed that hookahs could not be taken away from consumers or salespeople unless they were replaced by appropriate hobbies.

“Obligation is not going to work! There should be some rewards. When something is taken away from someone, it needs to be replaced with something better. If you only think of HS as a hobby, you should begin to think what other hobbies can replace it. Even the salespeople should be provided with an alternative job”. (Female, 19 years of experience)

Mental health consultations

Many participants mentioned that mental health consultation can facilitate HS cessation. Supportive acts can include stress management through regular mental health screening programs, and active education in life skills from early childhood, which can help people learn to react appropriately to stress, anger and temptation and to reject indecent suggestions made by peers. Another supportive service could be establishing centers that provide free face-to-face or on-call psychological services around the clock. See the following comment:

“Most people find themselves smoking hookahs to escape stress. So, if such mental problems as stress are controlled from school days and even earlier from pre-school, what later leads to HS may be prevented”. (Male, 21 years of experience)

Concerning free psychological consultations, one participant commented:

“If distressed families could refer to an advisor for help and be appropriately supported, they would for sure not have to retreat to HS to lower their stress. The advisor needs to be available and ready to help either face to face or on phone. Such advisors need to be supported by the executives” (Male, 26 years of experience).

Discussion

The present research is pioneering in employing qualitative content analysis to explore the determinants of HS control.

The interviewees believed that involving NGOs is a key strategy for HS control. Various NGOs, such as the Iranian Anti-Tobacco Association, are actively involved in tobacco control initiatives in Iran, with a focus on public health and environmental protection [25, 26]. The Iranian government, through the National Tobacco Control Headquarters, which is supported by the government and monitored by the Ministry of Health and Medical Education, cooperates with relevant ministries, authorities and NGOs [27]. The National Tobacco Free Initiative Committee (NTFIC) has actively cooperated and transferred information between the government and NGOs to speed up tobacco control endeavors in Iran [28]. There have been similar efforts in other countries such as Romania and Pakistan, where NGOs actively help control tobacco use in joint efforts with national and international parties and encourage the involvement of different organizations [29, 30]. In this regard, a study in India by Mondal et al. revealed that NGOs have played a major role in tobacco control measures around the world, acting effectively to raise victims’ awareness and rehabilitate them by constantly supporting them in controlling this unhealthy behavior [31]. It is therefore suggested to use the capacity of NGOs for knowledge sharing and for extending this culture, and to allocate national budgets for implementation.

Family support and supervision were found to be another key strategy for HS control, according to the interviewees. This finding is consistent with other studies that identified family support as an important factor in reducing the rate of HS [20, 32, 33]. Dana et al., who studied adolescents in 42 countries and examined the long-term impact of family activities on adolescent smoking behavior in the United States, pinpointed the significant role of family support and supervision in reducing smoking among adolescents [34]. Family support can play a vital role in shaping the attitudes and behaviors surrounding the initiation and continuation of hookah use; support during adolescence, in particular, has a continuous effect on reducing the risk of adolescent smoking [35]. Family support also seems to play an important role in the desire to quit smoking. When facing a challenge or stressor, social support in an informal environment can help adolescents cope with problems and stress, giving them a greater ability to manage the challenge while promoting supportive, close relationships. Fostering a supportive family environment and involving family members in cessation interventions can therefore contribute significantly to lower smoking rates and a healthier lifestyle.

The interviewees viewed mass media as another influential strategy for controlling HS. The use of appropriate health-promoting messages or motivational services is critical in supporting smoking cessation efforts [36]. Mass media can arguably advertise more effectively against the behavior because people tend to trust them; acquiring information from these trusted sources can deeply influence beliefs. The Iranian Ministry of Health has cooperated with relevant agencies to initiate a wide range of anti-tobacco mass media campaigns, mainly addressing hookah consumption, youth and women, and aiming to raise public awareness of the threats of tobacco consumption [27]. A relevant study among adults in the United States showed that mass media advertisements were positively correlated with reduced tobacco consumption [37]. Similarly, mass media campaigns have been recognized as a key and powerful strategy for reducing tobacco consumption, especially among youngsters [38, 39]. Mass media can thus be used for effective public health messaging and behavior change.

As the experts commented, peer education is another useful strategy for HS control. Peer education is a health promotion method that empowers community members to induce positive health changes within their peer group [40]. In an interventional study among high school students in Turkey, peer education was found to be an effective method of changing tobacco smoking behavior [41]. The interactive nature of peer education makes it an important complement to HS control and other health promotion measures. Support groups, including peers, can play a low-cost yet effective role in controlling unhealthy behaviors such as HS: peers understand each other better and accept health advice from friends more readily, and peer support groups also provide an opportunity to share experiences and eliminate the unhealthy behavior.

Information provided by popular figures and celebrities was another factor the interviewed experts perceived as effective in controlling this unhealthy behavior. A study among students of a university of medical sciences in Iran showed that advice from influential figures is an important factor in quitting smoking and reducing HS [42]. Celebrities often significantly influence their fans and followers, and their behaviors can shape social norms and perceptions [43]. This influence can be harnessed to internalize the cessation and reduction of smoking; conversely, celebrities’ engagement in HS can normalize the behavior and create a perception of social acceptance. Recruiting influential figures to promote healthy behaviors and discourage unhealthy ones can be an effective strategy to control the spread of HS and other unhealthy habits.

Another strategy suggested by the interviewees was alternative activities to the HS market/trade. One such alternative was ‘innovative and creative entrepreneurship’, which involves finding appropriate jobs to replace hookah sellers’ livelihoods. The rate of HS is higher in low- to average-income countries than in high-income countries [44], and economic pressures and a lack of suitable job opportunities appear to lead people to sell hookahs or offer hookah services; hookah marketing has probably been treated as an employment option for low-income families with no better opportunities. Local authorities are advised to provide special facilities that help sellers land suitable new jobs, thereby reducing the sale of, and access to, tobacco products. Providing alternative economic opportunities, particularly through entrepreneurship and job creation programs, could be an effective strategy to control hookah use; to this end, the underlying economic factors that draw people into hookah-related activities should be considered.

As the interviewed experts believed, another alternative to smoking is the provision of recreational facilities; adding gyms and sports facilities in slums, for example, could significantly help prevent and control tobacco consumption. Related Iranian research has pointed to the lack of recreational facilities as an underlying reason for HS [45, 46]. Arguably, Bandar Abbas, the main city in Hormozgan Province, lacks proper public recreational facilities such as amusement parks; the only public entertainment is spending time on the beach. Since the beach and surrounding areas offer no entertainment facilities for different age groups, many opportunists seize the chance to sell and rent hookahs, and many people consequently smoke hookahs as a leisure activity. Authorities are advised to take recreation seriously and act effectively to renovate urban spaces to better control and cease HS.

From the viewpoint of the interviewed experts, organizing festivals and joyful activities was another strategy for controlling HS. This idea is supported by an Iranian study among high school students, which revealed that non-smokers of hookah achieved higher happiness scores than smokers [47]. Using all the existing capacities of society can increase the pleasurable activities available to its members. Furthermore, those who often experience a high level of happiness presumably have fewer emotional and behavioral problems and would therefore be less likely to turn to HS. Conversely, festivals and joyful events may provide a social context in which HS is more common and could lead to increased consumption. Essentially, national policies are needed to create appropriate opportunities for people to express happiness.

There is also a need for ‘formulating regulations’, which can significantly help tackle the problem. One such regulation is heavy fines: research on youngsters and adolescents has shown that fining children and teenagers for carrying any form of tobacco product reduced tobacco consumption to a large extent [48, 49]. Another study on reducing HS among youngsters in the United States showed that anti-tobacco rules are mainly implemented for cigarettes, with no strict rule set or enforced for hookahs [50]. Notably, then, while fines have been effective in reducing tobacco consumption, strict rules against HS are lacking in some regions. Prohibitory rules and strict regulations, such as heavy fines, can thus be an effective way to prevent and control tobacco consumption, particularly HS, in Iran.

As the expert interviewees agreed, controlling HS effectively requires a participatory approach involving all relevant organizations: if the organizations in charge of HS control share duties and cooperate, they can better prevent and control the unhealthy behavior. Research on proven strategies for smoking cessation has likewise shown that, to tackle tobacco control, all organizations involved should act cooperatively and interdependently [51]. Arguably, current government policymaking on HS control has not been cooperative; policymakers evidently do not include the viewpoints of lower-ranking forces. If the comments of lower-ranking forces, or even of smokers themselves, were included, compliance with rules and plans would be more likely. Policymakers are therefore strongly recommended to take the advice of lower-ranking forces into account in decision making.

As the experts suggested, increasing tobacco taxes and prices is an effective measure for HS control. In a relevant study, increasing taxes significantly lowered the rate of cigarette smoking [21], and Hu, Mao, Shi, and Chen (2016) emphasized that raising taxes is the easiest and most economical way to control tobacco consumption in China [52]. Higher taxes are followed by lower market demand. Arguably, multifold taxation on coffee shops selling hookahs, compared with others, would reduce the profit of selling hookahs, demotivate sellers and reduce the supply of hookahs. Increasing taxes can thus be expected to reduce or correct the pattern of HS.

‘Familiarization of society with citizen rights’ was another effective strategy to control HS. This factor shows that society’s awareness and understanding of individual rights can affect HS-related behaviors. When citizens come to know their rights and the consequences of HS, more responsible and controlled HS behavior can follow. One study argued that a tobacco-free generation is consistent with citizens’ rights [ 53 ]. Katz (2005) showed that any attempt to control second-hand tobacco smoke should be focused on individual rights. If people know it is their right to enjoy clean air, they will react when they see others (hookah smokers) depriving them of this right. This would affect not only their own beliefs but also those of the smoker, who would need to be more cautious as others could readily complain. Therefore, this factor should not be neglected in controlling this unhealthy behavior.

The factor ‘Segregation of HS places’ was also mentioned by the interviewees as an effective strategy for HS control. This approach involves creating designated areas or spaces specifically for hookah smoking, separate from other public areas. A systematic review revealed that segregating HS places can play a key role in controlling HS [ 19 ]. Another similar study showed that developing an anti-smoking rule in public places and implementing it carefully can lower the mean rate of smoking by about 4–10%; thus, many people might cease smoking [ 54 ]. If HS is confined to particular places and banned in public spaces, it can help control HS effectively.

The expert interviewees believed that setting certain limits on the availability and purchase of hookahs can be an innovative rule to prevent, control, or even eliminate tobacco use altogether. Hookahs appear to be more accessible to the public than other tobacco products. A body of research in Iran and the United States points to the extensive and easy access to hookahs as a main reason for the high prevalence of HS [ 55 , 56 ]. Overall, tobacco use appears to be significantly lower in cities with strict rules than in cities without any strict restrictive rules and regulations. Making prohibitory rules, eliminating positive attitudes, and reinforcing socially negative attitudes toward HS can significantly help reduce access to hookahs.

The interviewed experts suggested that establishing tobacco cessation clinics (TCCs) was another strategy to control HS. A study showed that TCCs were capable of satisfying tobacco smokers’ needs and supporting hookah cessation; by providing effective educational interventions, these clinics help smokers stop smoking cigarettes [ 57 ]. TCCs can meet the needs of HSs and provide effective educational interventions to help them quit. By providing exclusive cessation services to hookah users, TCCs can be as effective in HS cessation as in cigarette smoking cessation [ 58 , 59 ]. The existence of specialized smoking cessation clinics can signal the seriousness of this matter and encourage people to think about the adverse effects of HS. Therefore, building dedicated cessation clinics for HS can be a great help for people who intend to quit hookahs.

The expert interviewees believed that providing motivational services was a strategy to control HS. A study at a Russian smoking-cessation center showed that individuals who were highly motivated to quit smoking had a success rate four times as high as those with lower motivation [ 58 ]. Providing appropriate motivational services, such as financial incentives, to individuals who have quit or intend to quit HS can effectively encourage and support their healthy behavior. One specific motivational service suggested was a cultural subsidy to address the affordability of hookah smoking in social settings. Roskin and Aveyard (2009) reported that the low cost of HS among group amusements was a main reason for smoking hookahs [ 59 ]. By allocating a budget for cultural subsidies to increase healthy recreational activities, authorities can take effective measures to control this unhealthy behavior and encourage individuals to adopt healthier behaviors.

‘Mental health consultation’ was another strategy of HS control, as the interviewees suggested. A study of the Armenian population in Tehran showed that a significant proportion of respondents cited frustration and psychological/spiritual problems at the outset of the unhealthy behavior of drug abuse [ 60 ]. Similarly, psychological needs and gaps were mentioned as major reasons for HS [ 45 ]. It can be argued that people with insufficient problem-solving skills, or who fail to assert themselves among friends, turn to hookahs when feeling unhappy or lonely. Providing mental health counseling can help address the psychological aspects of HS and contribute to effective control measures. This underscores the importance of mental health interventions within comprehensive strategies to prevent and reduce HS behaviors.

Strengths, limitations and suggestions for further research

There were certain limitations in the present research. As in all types of qualitative research, the researchers’ own beliefs and perceptions could have affected the procedures, from conceptualization to communication with participants and data interpretation [ 61 ]. Although exploratory heuristics were used in data analysis to extract the categories and subcategories directly from the data, it is possible that the interview questions did not cover all factors affecting HS. To compensate for this, the interviews continued until data saturation. Despite these limitations, there were several strengths too. The expert participants were selected from among the most knowledgeable in this area, with the benefit of providing realistic information for HS control. Further research is needed to explore these strategies in more extensive areas and across all demographic groups so that comprehensive data about effective strategies to prevent and cease HS become available.

Implications

To the best of the present researchers’ knowledge, no study has been conducted to date to determine effective factors in HS control. The present findings can significantly fill this gap in the literature. They can also form the basis of future comparative studies. Finally, the present findings can guide policymakers in developing the standards and guidelines needed to make effective plans and interventions to better control HS.

This qualitative study explored the multifaceted approaches adopted to quit HS. Using influential figures to control hookah smoking, promoting alternative activities, changing beliefs and attitudes, enforcing administrative regulations, and facilitating quit attempts all play an important role in tackling the prevalence of hookah smoking. These findings emphasize the importance of a comprehensive, multifaceted approach that integrates various interventions to effectively address hookah smoking behavior. Moving forward, targeted interventions based on these categories can significantly help reduce the prevalence of hookah smoking and promote healthy lifestyles among individuals.

Data availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

HS: Hookah smoking

HSs: Hookah smokers

NGO: Non-governmental organization

Bhatnagar A, Maziak W, Eissenberg T, Ward KD, Thurston G, King BA, Sutfin EL, Cobb CO, Griffiths M, Goldstein LB. Water pipe (hookah) smoking and cardiovascular disease risk: a scientific statement from the American Heart Association. Circulation. 2019;139(19):e917–36.

Bhargava SS, Das S, Priya H, Mishra D, Shivabasappa S, Sood A, Hazarika CR, Gupta PC, Chakma JK, Swasticharan L. The Burden and correlates of Waterpipe (Hookah) smoking among adolescents and youth: a systematic review. Subst Use Misuse. 2024;59(1):29–40.

World Health Organization. Fact sheet: Tobacco. World Health Organization; 2023.

Maziak W, Taleb ZB, Bahelah R, Islam F, Jaber R, Auf R, Salloum RG. The global epidemiology of waterpipe smoking. Tob Control. 2015;24(Suppl 1):i3–12.

Tobacco key facts. World Health Organization. https://www.who.int/news-room/fact-sheets/detail/tobacco. Accessed.

Le PH, Van Phan C, Truong DTT, Ho NM, Shuyna I, Le NT. Waterpipe tobacco smoking and risk of all-cause mortality: a prospective cohort study. Int J Epidemiol. 2024;53(1):dyad140.

Jawad M, Charide R, Waziry R, Darzi A, Ballout RA, Akl EA. The prevalence and trends of waterpipe tobacco smoking: a systematic review. PLoS ONE. 2018;13(2):e0192191.

Nakkash R, Khader Y, Chalak A, Abla R, Abu-Rmeileh NME, Mostafa A, Jawad M, Lee J-H, Salloum RG. Prevalence of cigarette and waterpipe tobacco smoking among adults in three Eastern Mediterranean countries: a cross-sectional household survey. BMJ Open. 2022;12(3):e055201.

Daou KN, Bou-Orm IR, Adib SM. Factors associated with waterpipe tobacco smoking among Lebanese women. Women Health. 2018;58(10):1124–34.

Meysamie A, Ghaletaki R, Haghazali M, Asgari F, Rashidi A, Khalilzadeh O, Esteghamati A, Abbasi M. Pattern of tobacco use among the Iranian adult population: results of the national survey of risk factors of non-communicable diseases (SuRFNCD-2007). Tob Control. 2010;19(2):125–8.

Teimourpour A, Yaseri M, Parsaeian M, Bagherpour Kalo M, Hosseini M. Application of mixture cure with the doubly censoring model in estimation of initiation age and prevalence of Water-Pipe Smoking in Iran: a New Approach in Population-Based studies. Tanaffos. 2020;19(3):243–9.

Baheiraei A, Mirghafourvand M, Nedjat S, Mohammadi E, Mohammad-Alizadeh Charandabi S. Prevalence of water pipe use and its correlates in Iranian women of reproductive age in Tehran: a population-based study. Med Princ Pract. 2012;21(4):340–4.

Nemati S, Rafei A, Freedman ND, Fotouhi A, Asgary F, Zendehdel K. Cigarette and Water-Pipe Use in Iran: geographical distribution and Time trends among the Adult Population; a pooled analysis of national STEPS surveys, 2006–2009. Archives Iran Med (AIM) 2017, 20(5).

Faghir Ganji M, Asgari E, Jabbari M, Nematollahi S, Hosseini M, Ahmadi-Gharaei H, ArabAhmadi A, Ostad Ghaderi M, Holakouie-Naieni K. Community health assessment: knowledge, attitude and practice of women regarding water-pipe smoking in Bandar Abbas. MethodsX. 2019;6:442–6.

Ghanbarnejad A, Aghamolaei T, Ghafari HR, Daryafti H. Hookah smoking and associated factors in rural region of Hormozgan, Iran. Zahedan J Res Med Sci 2012, 14(9).

Dadipoor S, Kok G, Aghamolaei T, Ghaffari M, Heyrani A, Ghanbarnezhad A. Explaining the determinants of hookah consumption among women in southern Iran: a qualitative study. BMC Public Health. 2019;19(1):1–13.

Dadipoor S, Heyrani A, Aghamolaei T, Ghanbarnezhad A, Ghaffari M. Predictors of Hookah smoking among women in Bandar Abbas, Southern Iran: a cross-sectional study based on the intervention mapping protocol. Subst Use Misuse. 2020;55(11):1800–7.

Momenabadi V, Hashemi SY, Borhaninejad VR. Factors affecting hookah smoking trend in the society: a review article. Addict Health. 2016;8(2):123.

Dadipoor S, Kok G, Aghamolaei T, Heyrani A, Ghaffari M, Ghanbarnezhad A. Factors associated with hookah smoking among women: a systematic review. Tob Prev Cessat. 2019;5:26.

Dadipoor S, Kok G, Heyrani A, Aghamolaei T, Ghaffari M, Ghanbarnezhad A. Explaining the determinants of hookah smoking cessation among southern Iranian women: a qualitative study. J Subst Use 2020:1–6.

Ho L-M, Schafferer C, Lee J-M, Yeh C-Y, Hsieh C-J. Raising cigarette excise tax to reduce consumption in low-and middle-income countries of the Asia-Pacific region: a simulation of the anticipated health and taxation revenues impacts. BMC Public Health. 2018;18(1):1187.

Wang RJ, Bhadriraju S, Glantz SA. E-cigarette use and adult cigarette smoking cessation: a meta-analysis. Am J Public Health. 2021;111(2):230–46.

Kotz D, Batra A, Kastaun S. Smoking cessation attempts and common strategies employed: a Germany-wide representative survey conducted in 19 waves from 2016 to 2019 (the DEBRA Study) and analyzed by socioeconomic status. Deutsches Ärzteblatt International. 2020;117(1–2):7.

Shahabi N, Shahbazi Sighaldeh S, Eshaghi Sani Kakhaki H, Mohseni S, Dadipoor S, El-Shahawy O. The effectiveness of a theory -based health education program on waterpipe smoking cessation in Iran: one year follow-up of a quasi-experimental research. BMC Public Health. 2024;24(1):664.

Masjedi M, Ghaffari S, Roshanfekr P, Hessari MB, Hamzehali S, Mehrjardi AA, Moaaf E, Shahsavan H. Implementing Prevention against Tobacco Dependence (PAD) toward the Tobacco-Free Schools, neighborhoods, and cities: study protocol. J Res Health Sci. 2020;20(3):e00490.

Sanadgol A, Doshmangir L, Khodayari-Zarnaq R, Sergeevich Gordeev V. Role of non-governmental organizations in moving toward universal health coverage: a case study in Iran. Front Public Health. 2022;10:985079.

Ravaghi H, Tourani S, Khodayari-Zarnaq R, Aghapour B, Pishgoo A, Arabloo J. Agenda-setting of tobacco control policy in Iran: a retrospective policy analysis study. BMC Public Health. 2021;21(1):2288.

Mohamed SF, Juma P, Asiki G, Kyobutungi C. Facilitators and barriers in the formulation and implementation of tobacco control policies in Kenya: a qualitative study. BMC Public Health. 2018;18(Suppl 1):960.

Eremia M, Radu-Loghin C, Lotrean LM. The role of non-governmental organizations in tobacco control in Romania. Tob Induc Dis 2018, 16(1).

Khan NU, Ahmed N, Subhani F, Kerai S, Zia N. Role of non-governmental organizations in the Prevention and Control of Poisoning in Pakistan. Asia Pac J Med Toxicol 2019, 8(2).

Mondal S, Van Belle S, Bhojani U, Law S, Maioni A. Policy processes in multisectoral tobacco control in India: the role of institutional architecture, political engagement and legal interventions. Int J Health Policy Manage. 2022;11(9):1703–14.

Kader Z, Crutzen R, Roman N. Intervention to reduce adolescent hookah pipe use and satisfy basic psychological needs. Cogent Psychol. 2020;7(1):1782099.

Nagawa CS, Pbert L, Wang B, Cutrona SL, Davis M, Lemon SC, Sadasivam RS. Association between family or peer views towards tobacco use and past 30-day smoking cessation among adults with mental health problems. Prev Med Rep. 2022;28:101886.

Dana A, Christodoulides E, Baniasadi T, Ghorbani S. Effects of Family-related activities on adolescent smoking in the United States: evidence from a longitudinal study. Int J Pediatr. 2022;10(3):15535–46.

Zaborskis A, Kavaliauskienė A, Eriksson C, Klemera E, Dimitrova E, Melkumova M, Husarova D. Family Support as Smoking Prevention during transition from early to late adolescence: a study in 42 countries. Int J Environ Res Public Health 2021, 18(23).

Villanti AC, Peasley-Miklus C, Cha S, Schulz J, Klemperer EM, LePine SE, West JC, Mays D, Mermelstein R, Higgins ST et al. Tailored text message and web intervention for smoking cessation in U.S. socioeconomically-disadvantaged young adults: a randomized controlled trial. Prev Med 2022, 165(Pt B):107209.

Emery S, Kim Y, Choi YK, Szczypka G, Wakefield M, Chaloupka FJ. The effects of smoking-related television advertising on smoking and intentions to quit among adults in the United States: 1999–2007. Am J Public Health. 2012;102(4):751–7.

US Department of Health and Human Services. Preventing tobacco use among youth and young adults: a report of the Surgeon General. Atlanta, GA: US Department of Health and Human Services, Centers for Disease …

Allen JA, Duke JC, Davis KC, Kim AE, Nonnemaker JM, Farrelly MC. Using mass media campaigns to reduce youth tobacco use: a review. Am J Health Promotion. 2015;30(2):e71–82.

Dodd S, Widnall E, Russell AE, Curtin EL, Simmonds R, Limmer M, Kidger J. School-based peer education interventions to improve health: a global systematic review of effectiveness. BMC Public Health. 2022;22(1):2247.

Bilgiç N, Günay T. Evaluation of effectiveness of peer education on smoking behavior among high school students. Saudi Med J. 2018;39(1):74.

Shamsipoor MKBR, Mohammad Pour Asl A, Mansouri A. Smoking status and factors influencing smoking cessation among students of University of Medical Sciences, Tabriz, Iran. Journal of Knowledge & Health, 6th Iranian Congress of Epidemiology and Public Health, July 13–15, 2010. Shahroud University of Medical Sciences, Shahroud, Iran, 5.

Jafari A, Mahdizadeh M, Peyman N, Gholian-Aval M, Tehrani H. Exploration the role of social, cultural and environmental factors in tendency of female adolescents to smoking based on the qualitative content analysis. BMC Womens Health. 2022;22(1):38.

Stone E, Peters M. Young low and middle-income country (LMIC) smokers—implications for global tobacco control. Translational lung cancer Res. 2017;6(Suppl 1):S44.

Baheiraei A, Sighaldeh SS, Ebadi A, Kelishadi R, Majdzadeh SR. Psycho-social needs impact on hookah smoking initiation among women: a qualitative study from Iran. Int J Prev Med 2015, 6.

Majdzadeh R, Zamani G, Kazemi H. Qualitative study of people’s attitudes to smoking hookah and the ways to combat it in Hormozgan city. Hakim. 2002;5(3):183–7.

Ataeiasl M, Sarbakhsh P, Dadashzadeh H, Augner C, Anbarlouei M, Mohammadpoorasl A. Relationship between happiness and tobacco smoking among high school students. Epidemiol Health 2018, 40.

Jason LA, Pokorny SB, Schoeny ME. Evaluating the effects of enforcements and fines on youth smoking. Crit Public Health. 2003;13(1):33–45.

Wakefield M, Giovino G. Teen penalties for tobacco possession, use, and purchase: evidence and issues. Tob Control. 2003;12(suppl 1):i6–13.

Morris DS, Fiala SC, Pawlak R. Peer reviewed: opportunities for Policy interventions to reduce Youth Hookah Smoking in the United States. Prev Chronic Dis 2012, 9.

Fowler G. Proven strategies for smoking cessation: adopting a global approach. Eur J Pub Health. 2000;10(suppl3):3–4.

Hu T-w, Mao Z, Shi J, Chen W. The role of taxation in tobacco control and its potential economic impact in China. In: Economics of Tobacco Control in China: From Policy Research to Practice. World Scientific; 2016. pp. 149–68.

van der Eijk Y, Porter G. Human rights and ethical considerations for a tobacco-free generation. Tob Control. 2015;24(3):238–42.

Woollery T, Asma S, Sharp D, Mundial B. Clean indoor-air laws and youth access restrictions. Prabhat Jha and Contraband Tobacco in Canada/53 2000.

Baheiraei A, Sighaldeh SS, Ebadi A, Kelishadi R, Majdzadeh R. Factors that contribute in the first hookah smoking trial by women: a qualitative study from Iran. Iran J Public Health. 2015;44(1):100.

Cobb C, Ward KD, Maziak W, Shihadeh AL, Eissenberg T. Waterpipe tobacco smoking: an emerging health crisis in the United States. Am J Health Behav. 2010;34(3):275–85.

Elizabeth H. Prevention of Tobacco Use and the Mass Media. Int J Manage Social Sci. 2016;4(6):54–65.

Levshin V, Slepchenko N. Determinants of smoking cessation and abstinence in a Russian smoking-cessation center. 2017.

Roskin J, Aveyard P. Canadian and English students’ beliefs about waterpipe smoking: a qualitative study. BMC Public Health. 2009;9(1):10.

Farhoudian A, Sadr Sadat J, Mohamadi F, Manokian A, Jafari F, et al. Knowledge and attitude of a group of Armenians in Tehran to addiction and substance abuse. J Cogn Sci. 2008;10(2):9–20.

Kuper A, Reeves S, Levinson W. An introduction to reading and appraising qualitative research. BMJ. 2008;337(7666):404–7.

Acknowledgements

The authors would like to thank Hormozgan University of Medical Sciences for their financial support. The authors would also like to express their gratitude to the participants for their sincere cooperation in this study.

This project received a research grant from Hormozgan University of Medical Sciences and the National Institute for Medical Research Development (Grant No. 983514). The funding body was not involved in the design of the study, data collection, data analysis, interpretation of data, or writing of the manuscript.

Author information

Authors and affiliations.

Tobacco and Health Research Center, Hormozgan University of Medical Sciences, Bandar Abbas, Iran

Sara Dadipoor, Hadi Eshaghi Sani Kakhaki & Nahid Shahabi

Mother and Child Welfare Research Center, Hormozgan University of Medical Sciences, Bandar Abbas, Iran

The Centre for Interdisciplinary Studies of Children, Families and Society, University of the Western Cape, Bellville, South Africa

Zainab Kader

Contributions

SD contributed to the design and interview with participants, analysis, interpretation and drafting of the research manuscript. NSH and HESK contributed to the inception, design, interpretation and final approval of the manuscript for publication. ZK and ERN contributed to the data analysis, interpretation and editing. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Hadi Eshaghi Sani Kakhaki or Nahid Shahabi .

Ethics declarations

Ethical approval.

This study was approved by the ethics committee of Hormozgan University of Medical Sciences (#IR.HUMS.REC.1400.369). Written informed consent was obtained from each eligible respondent. All research procedures abided by the relevant guidelines and regulations.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article.

Dadipoor, S., Alavi, A., Eshaghi Sani Kakhaki, H. et al. A qualitative exploration of experts’ views about multi-dimensional aspects of hookah smoking control in Iran. BMC Public Health 24 , 1665 (2024). https://doi.org/10.1186/s12889-024-19139-9

Received : 01 October 2023

Accepted : 13 June 2024

Published : 22 June 2024

DOI : https://doi.org/10.1186/s12889-024-19139-9

  • Nicotine dependence
  • Smoking cessation
  • Qualitative study

BMC Public Health

ISSN: 1471-2458

Overview and key findings of the 2024 Digital News Report

This year’s report comes at a time when around half the world’s population have been going to the polls in national and regional elections, and as wars continue to rage in Ukraine and Gaza. In these troubled times, a supply of accurate, independent journalism remains more important than ever, and yet in many of the countries covered in our survey we find the news media increasingly challenged by rising mis- and disinformation, low trust, attacks by politicians, and an uncertain business environment.

Our country pages this year are filled with examples of layoffs, closures, and other cuts due to a combination of rising costs, falling advertising revenues, and sharp declines in traffic from social media. In some parts of the world these economic challenges have made it even harder for news media to resist pressures from powerful businesspeople or governments looking to influence coverage and control narratives.

There is no single cause for this crisis; it has been building for some time, but many of the immediate challenges are compounded by the power and changing strategies of rival big tech companies, including social media, search engines, and video platforms. Some are now explicitly deprioritising news and political content, while others have switched focus from publishers to ‘creators’, pushing more fun and engaging formats – including video – to keep more attention within their own platforms. These private companies do not have any obligations to the news, but with many people now getting much of their information via these competing platforms, these shifts have consequences not only for the news industry but also for our societies. As if this were not enough, rapid advances in artificial intelligence (AI) are about to set in motion a further series of changes, including AI-driven search interfaces and chatbots that could further reduce traffic flows to news websites and apps, adding further uncertainty to how information environments might look in a few years.

Our report this year documents the scale and impact of these ‘platform resets’. With TikTok, Instagram Reels, and YouTube on the rise, we look at why consumers are embracing more video consumption and investigate which mainstream and alternative accounts – including creators and influencers – are getting most attention when it comes to news. We also explore the very different levels of confidence people have in their ability to distinguish between trustworthy and untrustworthy content on a range of popular third-party platforms around the world. For the first time in our survey, we also take a detailed look at consumer attitudes towards the use of AI in the news, supported by qualitative research in three countries (the UK, US, and Mexico). As publishers rapidly adopt AI, to make their businesses more efficient and to personalise content, our research suggests they need to proceed with caution, as the public generally wants humans in the driving seat at all times.

With publishers struggling to connect with much of the public, and growing numbers of people selectively (and in some cases continuously) avoiding the news, we have also explored different user needs to understand where the biggest gaps lie between what audiences want and what publishers currently provide. And we look at the price that some consumers are currently paying for online news and what might entice more people to join them. 

An episode on the findings

Spotify | Apple

This 13th edition of our Digital News Report , which is based on data from six continents and 47 markets, reminds us that these changes are not always evenly distributed. While journalism is struggling overall, in some parts of the world news media remain profitable, independent, and widely trusted. But even in these countries, we find challenges around the pace of change, the role of platforms, and how to adapt to a digital environment that seems to become more complex and fragmented every year. The overall story is captured in this Executive Summary, followed by Section 1 with chapters containing additional analysis, and then individual country and market pages in Section 2.

Here is a summary of some of the key findings from our 2024 research.

In many countries, especially outside Europe and the United States, we find a significant further decline in the use of Facebook for news and a growing reliance on a range of alternatives including private messaging apps and video networks. Facebook news consumption is down 4 percentage points, across all countries, in the last year.

News use across online platforms is fragmenting, with six networks now reaching at least 10% of our respondents, compared with just two a decade ago. YouTube is used for news by almost a third (31%) of our global sample each week, WhatsApp by around a fifth (21%), while TikTok (13%) has overtaken Twitter (10%), now rebranded X, for the first time.

Linked to these shifts, video is becoming a more important source of online news, especially with younger groups. Short news videos are accessed by two-thirds (66%) of our sample each week, with longer formats attracting around half (51%). The main locus of news video consumption is online platforms (72%) rather than publisher websites (22%), increasing the challenges around monetisation and connection.

Although the platform mix is shifting, the majority continue to identify platforms including social media, search, or aggregators as their main gateway to online news. Across markets, only around a fifth of respondents (22%) identify news websites or apps as their main source of online news – that’s down 10 percentage points on 2018. Publishers in a few Northern European markets have managed to buck this trend, but younger groups everywhere are showing a weaker connection with news brands than they did in the past.

Turning to the sources that people pay most attention to when it comes to news on various platforms, we find an increasing focus on partisan commentators, influencers, and young news creators, especially on YouTube and TikTok. But in social networks such as Facebook and X, traditional news brands and journalists still tend to play a prominent role.

Concern about what is real and what is fake on the internet when it comes to online news has risen by 3 percentage points in the last year with around six in ten (59%) saying they are concerned. The figure is considerably higher in South Africa (81%) and the United States (72%), both countries that have been holding elections this year.

Worries about how to distinguish between trustworthy and untrustworthy content on online platforms are highest for TikTok and X when compared with other online networks. Both platforms have hosted misinformation or conspiracies around stories such as the war in Gaza and the Princess of Wales’s health, as well as so-called ‘deepfake’ pictures and videos.

As publishers embrace the use of AI, we find widespread suspicion about how it might be used, especially for ‘hard’ news stories such as politics or war. There is more comfort with the use of AI in behind-the-scenes tasks such as transcription and translation, and in supporting rather than replacing journalists.

Trust in the news (40%) has remained stable over the last year, but is still four points lower overall than it was at the height of the Coronavirus pandemic. Finland remains the country with the highest levels of overall trust (69%), while Greece (23%) and Hungary (23%) have the lowest levels, amid concerns about undue political and business influence over the media.

Elections have increased interest in the news in a few countries, including the United States (+3), but the overall trend remains downward. Interest in news in Argentina, for example, has fallen from 77% in 2017 to 45% today. In the United Kingdom interest in news has almost halved since 2015. In both countries the change is mirrored by a similar decline in interest in politics.

At the same time, we find a rise in selective news avoidance. Around four in ten (39%) now say they sometimes or often avoid the news – up 3 percentage points on last year’s average – with more significant increases in Brazil, Spain, Germany, and Finland. Open comments suggest that the intractable conflicts in Ukraine and the Middle East may have had some impact. In a separate question, we find that the proportion that say they feel ‘overloaded’ by the amount of news these days has grown substantially (+11pp) since 2019 when we last asked this question.

In exploring user needs around news, our data suggest that publishers may be focusing too much on updating people on top news stories and not spending enough time providing different perspectives on issues or reporting stories that can provide a basis for occasional optimism. In terms of topics, we find that audiences feel mostly well served by political and sports news but there are gaps around local news in some countries, as well as health and education news.

Our data show little growth in news subscription, with just 17% saying they paid for any online news in the last year, across a basket of 20 richer countries. North European countries such as Norway (40%) and Sweden (31%) have the highest proportion of those paying, with Japan (9%) and the United Kingdom (8%) amongst the lowest. As in previous years, we find that a large proportion of digital subscriptions go to just a few upmarket national brands – reinforcing the winner-takes-most dynamics that are often linked with digital media.

In some countries we find evidence of heavy discounting, with around four in ten (41%) saying they currently pay less than the full price. Prospects of attracting new subscribers remain limited by a continued reluctance to pay for news, linked to low interest and an abundance of free sources. Well over half (55%) of those that are not currently subscribing say that they would pay nothing for online news, with most of the rest prepared to offer the equivalent of just a few dollars per month, when pressed. Across markets, just 2% of non-payers say that they would pay the equivalent of an average full price subscription.

News podcasting remains a bright spot for publishers, attracting younger, well-educated audiences but is a minority activity overall. Across a basket of 20 countries, just over a third (35%) access a podcast monthly, with 13% accessing a show relating to news and current affairs. Many of the most popular podcasts are now filmed and distributed via video platforms such as YouTube and TikTok.

The great platform reset is underway

Online platforms have shaped many aspects of our lives over the last few decades, from how we find and distribute information, how we are advertised to, how we spend our money, how we share experiences, and most recently, how we consume entertainment. But even as online platforms have brought great convenience for consumers – and advertisers have flocked to them – they have also disrupted traditional publishing business models in very profound ways. Our data suggest we are now at the beginning of a technology shift which is bringing a new wave of innovation to the platform environment, presenting challenges for incumbent technology companies, the news industry, and for society.

Platforms have been adjusting strategies in the light of generative AI, and are also navigating changing consumer behaviour, as well as increased regulatory concerns about misinformation and other issues. Meta in particular has been trying to reduce the role of news across Facebook, Instagram, and Threads, and has restricted the algorithmic promotion of political content. The company has also been reducing support for the news industry, not renewing deals worth millions of dollars, and removing its news tab in a number of countries. 1

The impact of these changes, some of which have been going on for a while, is illustrated by our first chart, which uses aggregated data from 12, mostly developed, markets we have been following since 2014. It shows declining, though still substantial, reach for Facebook over time – down 16pp since 2016 – as well as increased fragmentation of attention across multiple networks. A decade ago, only Facebook and YouTube had a reach of more than 10% for news in these countries; now there are many more networks, often used in combination (several of them owned by Meta). Taken together, platforms remain as important as ever – but the role and strategy of individual platforms is changing as they compete and evolve, with Facebook becoming less important and many others becoming relatively more so.

The previous chart also highlights the strong shift towards video-based networks such as YouTube, TikTok (and Instagram), all of which have grown in importance for news since the COVID-19 pandemic drove new habits. Faced with new competition, both Facebook and X have been refocusing their strategies, looking to keep users within the platform rather than link out to publishers as they might have done in the past. This has involved a prioritisation of video and other proprietary formats. Industry data show that the combined effect of these changes was to reduce traffic referrals from Facebook to publishers by 48% last year and from X by 27%. 2  Looking at survey data across our 47 markets we find much regional and country-based variation in the use of different networks, with the fastest changes in the Global South, perhaps because they tend to be more dependent on social media for news.

TikTok remains most popular with younger groups and, although its use for any purpose is similar to last year, the proportion using it for news has grown to 13% (+2) across all markets and 23% for 18–24s. These averages hide rapid growth in Africa, Latin America, and parts of Asia. More than a third now use the network for news every week in Thailand (39%) and Kenya (36%), with a quarter or more accessing it in Indonesia (29%) and Peru (27%). This compares with just 4% in the UK, 3% in Denmark, and 9% in the United States. The future of TikTok remains uncertain in the US following concerns about Chinese influence and it is already banned in India, though similar apps, such as Moj, Chingari, and Josh, are emerging there.

The growing reach of TikTok and other youth-orientated networks has not escaped the attention of politicians who have incorporated it into their media campaigns. Argentina’s new populist president, Javier Milei, runs a successful TikTok account with 2.2m followers while the new Indonesian president, Prabowo Subianto, swept to victory in February using a social media campaign featuring AI-generated images, rebranding the former hard-line general as a cute and charming dancing grandpa. We explore the implications for trust and reliability of information later in this report.

Shift to video networks brings different dynamics

Traditional social networks such as Facebook and Twitter were originally built around the social graph – effectively this means content posted directly by friends and contacts (connected content). But video networks such as YouTube and TikTok are focused more on content that can be posted by anybody – recommended content that does not necessarily come from accounts users have chosen to follow. 

In previous research (Digital News Report 2021, 2023) we have shown that when it comes to online news, most audiences still prefer text because of its flexibility and control, but that doesn’t mean that video – and especially short-form video – is not becoming a much bigger part of media diets. Across countries, two-thirds (66%) say they access a short news video, which we defined as a few minutes or less, at least once a week, again with higher levels outside the US and Western Europe. Almost nine in ten of the online population in Thailand (87%) access short-form videos weekly, with half (50%) saying they do this every day. Americans access short news videos a little less often (60% weekly and 20% daily), while the British consume the least short-form news (39% weekly and just 9% daily).

Live news streams and long-form recordings are also widely consumed. Taking the United States as an example, we can see how under 35s consume the most of each format, with older people being relatively less likely to consume live or long-form video.

One of the reasons why news video consumption is higher in the United States than in most European countries is the abundant supply of political content from both traditional and non-traditional sources. Some are creators native to online media. Others have come from broadcast backgrounds. In the last few years, a number of high-profile TV anchors, including Megyn Kelly, Tucker Carlson, and Don Lemon, have switched their focus to online platforms as they look to take advantage of changing consumer behaviour. 

Carlson’s interview with Russian president Vladimir Putin received more than 200m plays on X and 34m on his YouTube channel. In the UK, another controversial figure, Piers Morgan, recently left his daily broadcast show on Talk TV in favour of the flexibility and control offered as an independent operator working across multiple streaming platforms. (It is worth noting that many of these platform moves came only after the person in question walked out on, or was ditched by, their former employer in mainstream TV.)

The jury is currently out on whether these big personalities can build robust traffic or sustainable businesses within platform environments. There is a similar challenge for mainstream publishers who find platform-based videos harder to monetise than those consumed via owned and operated websites and apps. 

YouTube and Facebook remain the most important platforms for online news video overall (see next chart), but we see significant market differences, with Facebook the most popular for video news in the Philippines, YouTube in South Korea, and X and TikTok playing a key role in Nigeria and Indonesia respectively. YouTube is also the top destination for under 25s, though TikTok and Instagram are not far behind.

Older viewers still like to consume much of their video through news websites, though the majority say they mostly access video via third-party platforms. Only in countries such as Norway do we find that getting on for half of users (45%) say their main video consumption is via websites, a reflection of the strength of brands in that market, a commitment to a good user experience, and a strategy that restricts the number of publisher videos that are posted to platforms like Facebook and YouTube. 

Where do people pay attention when using online platforms?

One of the big challenges of the shift to video networks with a younger age profile is that journalists and news organisations are often eclipsed by news creators and other influencers, even when it comes to news.

This year we repeated a question we asked first in 2021 about where audiences pay most attention when it comes to news on various platforms. As in previous years, we find that across markets, while mainstream media and journalists often lead conversations in X and Facebook, they struggle to get as much attention in Instagram, Snapchat, and TikTok where alternative sources and personalities, including online influencers and celebrities, are often more prominent.

It is a similar story across many markets, though differences emerge when we look at specific online networks and at a country level. In the following chart we compare attention around news content on YouTube, the second largest network overall. We find that alternative sources and online influencers play a bigger role in both the United States and Brazil than is the case in the United Kingdom.

But who are these personalities and celebrities, and what kind of alternative sources are attracting attention? To answer these questions, we asked respondents who had selected each option to list up to three mainstream accounts they followed most closely and then three alternative ones (e.g. alternative accounts, influencers, etc.). We then counted and coded these responses.

In the United States, in particular, we find a wide range of politically partisan voices including Tucker Carlson, Alex Jones (recently reinstated on X), Ben Shapiro, Glenn Beck, and many more. These voices come mostly from the right, with a narrative around a ‘trusted’ alternative to what they see as the biased liberal mainstream media, but there is also significant representation on the progressive left (David Pakman and commentators from Meidas Touch). The top 10 named individuals in the US list are all men who tend to express strong opinions about politics.

Partisan voices (from both left and right) are an important part of the picture elsewhere, but we also find diverse perspectives and new approaches to storytelling. In France, Hugo Travers, 27, known online as Hugo Décrypte, has become a leading news source for young French people through his explanatory videos about politics (2.6m subscribers on YouTube and 5.8m on TikTok). Our data show that, across all networks, he gets more mentions than traditional news brands such as Le Monde or BFMTV, and that the average age of his followers is just 27, compared with between 40 and 45 for those large traditional brands.

Youth-focused brands Brut and Konbini were also widely cited in France, while in the UK, Politics Joe and TLDR News, set up by Jack Kelly, attract attention for videos that try to make serious topics accessible for young people. The most mentioned TikTok news creator in the UK is Dylan Page, who has more than 10m followers on the platform. In the United States, Vitus Spehar presents a fun daily news round-up, often from a prone position on the floor, @underthedesknews (a satirical dig at the classic TV format).

Youth-based news influencers around the world


Coverage of war and conflict

We also found a number of accounts sharing videos about the wars in Gaza and Ukraine. With mainstream news access restricted, young social media influencers in Gaza, Yemen, and elsewhere have been filling in the gaps – documenting the often-brutal realities of life on the ground. Because these videos are posted by many different accounts and ordinary people, it is hard to quantify the impact, but our methodology does pick up a few individual influencer accounts as well as campaigning groups that pull together footage from across social media. As one example, the Instagram account Eye on Palestine appears in our data across a number of countries. The account says it brings ‘the sounds and images that official media does not show’. WarMonitor, one of a number of influential accounts that have been recommended by prominent figures such as Elon Musk, has added hundreds of thousands of followers during the Israel–Palestine conflict.

Eye on Palestine

Finally, celebrities such as Taylor Swift, the Kardashians, and Lionel Messi were widely mentioned by younger people, mostly in reference to Instagram, despite the fact that they rarely talk about politics. This suggests that younger people take a wide view of news, potentially including updates on a singer’s tour dates, on fashion, or on football.

Motivations for using social video

In analysing open comments, we found three core reasons why audiences are attracted to video and other content in social and video platforms.


First, respondents, including many younger ones, say the comparatively unfiltered nature of much of the coverage makes it come across as more trustworthy and authentic than traditional media. ‘I like the videos that were taken by an innocent bystander. These videos are unedited and there is no bias or political spin,’ says one. 3  There is an enduring belief that videos are harder to falsify, while enabling people to make up their own mind, even as the development of AI may lead more people to question it.

Second, people talk about the convenience of having news served to them on a platform where they already spend time, which knows their interests, and where ‘the algorithm feeds suggestions based on previous viewing’.

Third, social video platforms are valued for the different perspectives they bring. For some people that meant a partisan perspective aligned with their interests; for others it related to greater depth around a personal passion or a wider range of topics to explore.

It is important to note that very few people use only online video for news each week – around 4% across countries according to our data. The majority use a mix of text, video, and audio – and a combination of mainstream brands that may or may not be supplemented by alternative voices. But as audiences consume more content in these networks, they sometimes worry less about where the content comes from, and more about the convenience and choice delivered within their feed. Though there are examples of successful video consumption within news websites and apps, for most publishers the shift towards video presents a difficult balancing act. How can they take advantage of a format that can engage audiences in powerful ways, including younger ones, while developing meaningful relationships – and businesses – on someone else’s platform?

To what extent do people feel confident about identifying trustworthy news in different online platforms?

In this critical year of elections, many worry about the reliability of content, about the scope for manipulation of online platforms by ‘bad actors’, about how some domestic politicians and media personalities express themselves, and about the opaque ways in which platforms themselves select and promote content.

Across markets the proportion of our respondents that say they are worried about what is real and what is fake on the internet overall is up 3pp from 56% to 59%. It is highest in some of the countries holding polls this year, including South Africa (81%), the United States (72%), and the UK (70%). Taking a regional view, we find the highest levels of concern in Africa (75%) and lower levels in much of Northern and Western Europe (e.g., Norway 45% and Germany 42%).

Previous research shows that these audience concerns about misinformation are often driven less by news that is completely ‘made up’ and more by seeing opinions and agendas that they may disagree with – as well as journalism they regard as superficial and unsubstantiated. In this context it is perhaps not surprising that politics remains the topic that engenders the most concern about ‘fake or misleading’ content, along with health information and news about the wars in Ukraine and Gaza.

Against this backdrop of widespread concern, we have, for the first time, asked users of specific online platforms how easy or difficult they find it to distinguish between trustworthy and untrustworthy content. Given its increasing use for news – and its much younger age profile – it is worrying to find that more than a quarter of TikTok users (27%) say they struggle to detect trustworthy news, the highest score of all the networks covered. A further quarter have no strong opinion and around four in ten (44%) say they find it easy. Fact-checkers and others have been paying much more attention to the network recently, with NewsGuard reporting in 2022 that a fifth (20%) of a sample of searches on prominent news topics such as Ukraine and COVID vaccines contained misinformation. 4 Most recently it was at the centre of a flood of unfounded rumours and conspiracies about the Princess of Wales after her hospital operation. A significant proportion of X users (24%) also say that it is hard to pick out trustworthy news. This may be because news plays an outsized role on the platform, or because of the wide range of views expressed, further encouraged by Elon Musk, a self-declared free speech advocate, since he took over the company.

The numbers are only a bit lower in some of the largest networks such as Facebook, Instagram, YouTube, and WhatsApp, which have all been implicated in various misinformation problems too.

While there is widespread concern about different networks, it is also important to recognise that many people are confident about their ability to tell trustworthy and untrustworthy news and information apart. In fact, around half of respondents using each network say they find it easy to do so, including many younger and less educated users – even if these perceptions may or may not be based on reality. All of the major social and video platforms recognise these challenges, and have been boosting their technical and human defences, not least because of the potential for a flood of AI-generated synthetic content in this year’s elections.

In exploring country differences, we find that people in Western European countries such as Germany (see the next chart) are less confident about their ability to distinguish between trustworthy and untrustworthy information on X and TikTok than respondents in the United States. This may reflect very different official and media narratives about the balance between free speech and online harms. The EU has introduced legislation such as the Digital Services Act, imposing greater obligations on platforms in the run-up to June’s EU Parliament elections. 5  X is currently being investigated over suspected breaches of content moderation rules. 

But even within the United States, which has lower concern generally, we find sharp differences based on political beliefs. Amid bitter debates over de-platforming, some voices on the left have been calling for more restrictions and many on the right insisting on even more free speech. We see this political split clearly in the data, especially in terms of attitudes to X and to some extent YouTube.

In our data, people on the left are much more suspicious of content they see in both networks, but other platforms are seen as mostly neutral in this regard. In no other market do we see the same level of polarisation around X, but the same broad left-right dynamics are at play, with the left more uncomfortable about the societal impact of harmful online content.

In some African markets, such as Kenya, we see a significant difference in concern over TikTok compared with other popular networks such as X or WhatsApp, the most used network for news. The app has been labelled ‘a serious threat to the cultural and religious values of Kenya’ in a petition to parliament after being implicated in the sharing of adult content, misinformation, and hate speech. 6 But one other reason for TikTok’s higher score may be because most content there is posted by people they don’t know personally. WhatsApp posts tend to come from a close social circle, who are likely to be more trusted. Paradoxically, this could mean that information spread in WhatsApp carries more danger, because defences may be lower.

Fears around AI and misinformation

The last year has seen an increased incidence of so-called ‘deepfakes’ generated by AI, including an audio recording falsely purporting to be Joe Biden asking supporters not to vote in a primary, a campaign video containing manipulated photos of Donald Trump, and artificially generated pictures of the war in the Middle East, posted by supporters of both the Palestinian and Israeli sides and aimed at winning sympathy for their cause.

AI-generated (fake) pictures from the war have been widely circulated on social media


Our qualitative research suggests that, while most people do not think they have personally seen these kinds of synthetic images or videos, some younger, heavy users of social media now think they are coming across them regularly.

In the US some of our participants felt widespread use of generative AI technologies was likely to make detecting misinformation more difficult, especially around important subjects such as politics and elections; others worried about the lack of transparency and the potential for discrimination against minority groups.

Others took a more balanced view, noting that these technologies could be used to provide more relevant and useful content, while also recognising the risks.

Journalistic uses of artificial intelligence

News organisations have reported extensively on the development and impact of AI on society, but they are also starting to adopt these technologies themselves for two key reasons. First, they hope that automating behind-the-scenes processes such as transcription, copy-editing, and layout will substantially reduce costs. Second, AI technologies could help to personalise the content itself – making it more appealing for audiences. They need to do this without reducing audience trust, which many believe will become an increasingly critical asset in a world of abundant synthetic media.

In the last year, we have seen media companies deploying a range of AI solutions, with varying degrees of human oversight. Nordic publishers, including Schibsted, now include AI-generated ‘bullet points’ at the top of many of their titles’ stories to increase engagement. One German publisher uses an AI robot named Klara Indernach to write more than 5% of its published stories, 7 while others have deployed tools such as Midjourney or OpenAI’s Dall-E for automating graphic illustrations. Meanwhile, Digital News Report country pages from Indonesia, South Korea, Slovakia, Taiwan, and Mexico, amongst others, reference a range of experimental chatbots and avatars now presenting the news. Nat is one of three AI-generated news readers from Mexico’s Radio Fórmula, used to deliver breaking news and analysis through its website and across social media channels. 8

Nat, one of Radio Fórmula’s AI-generated news readers


Elsewhere we find content farms increasingly using AI to rewrite news, often without permission and with no human checks in the loop. Industry concerns about copyright and about potential mistakes (some of which could be caused by so-called hallucinations) are well documented, but we know less about how audiences feel about these issues and the implications for trust overall. 

Across 28 countries where we included questions, we find our survey respondents to be largely uncomfortable with the use of AI in situations where content is created mainly by the AI with some human oversight. By contrast, there is less discomfort when AI is used to assist (human) journalists, for example in transcribing interviews or summarising materials for research. Here respondents are broadly more comfortable than uncomfortable.

Our findings, which also show that respondents in the US are significantly more comfortable about different uses of AI than those living in Europe, may be linked to the cues people are getting from the media. British press coverage of AI, for example, has been characterised as overly negative and sensationalist, 9 and UK scores for comfort with less closely monitored use of AI are the lowest in our survey (10%). By contrast, the leading role of US companies and the opportunities for jobs and growth play a bigger part in US media narratives. Across countries, comfort levels are higher with younger groups who are some of the heaviest users of AI tools such as ChatGPT.

Our research also indicates that people who tend to trust the news in general are more likely than those who do not to be comfortable with uses of AI where humans (journalists) remain in control. We find comfort gaps ranging from 24 percentage points in the US to 10 percentage points in Mexico. Our qualitative research on AI suggests that trust will be a key issue going forward, with many participants feeling that traditional media have much to lose.

Comfort with AI is also closely related to the importance and seriousness of the subject being discussed. People say they feel less comfortable with AI-generated news on topics such as politics and crime, and more comfortable with sports, arts, or entertainment news, subjects where mistakes tend to have less serious consequences and where there is potentially more value in personalisation of the content. 

While participants were generally more concerned about some topics than others, there were some important nuances. For example, some could see the value in using AI to automate local election stories to provide a quicker, more comprehensive service, as these tended to be fact-based and didn’t involve the AI making political judgements.

Finally, we find that comfort levels about the different uses of AI tend to be higher with people who have read or heard more about it, even if many remain cautious. This suggests that, as people use the technology and find it personally useful, they may take a more balanced view of the risks and the benefits going forward.

Overall, we are still at the early stages of journalists’ usage of AI, but this also makes it a time of maximum risk for news organisations. Our data suggest that audiences are still deeply ambivalent about the use of the technology, which means that publishers need to be extremely cautious about where and how they deploy it. Wider concerns about a flood of synthetic content in online platforms mean that trusted brands that use the technologies responsibly could be rewarded, but if they get things wrong that trust could easily be lost.

Gateways to news and the importance of search and aggregator portals

Publishers are not just concerned about falling referrals from social media but also about what might happen with search and other aggregators if chatbot interfaces take off. Google and Microsoft are both experimenting with integrating more direct answers to news queries generated by AI and a range of existing and new mobile apps are also looking to create new experiences that provide answers without requiring a click-through to a publisher.

It is important to note that across all markets, search and aggregators, taken together (33%), are a more important gateway to news than social media (29%) and direct access (22%). A large proportion of mobile alerts (9%) are also generated by aggregators and portals, adding to the concerns about what might happen next.

Unlike social media, search is seen as important across all age groups – 25% of under 35s also prefer to start news journeys with search – and because people are often actively looking for information, the resulting news journey tends to be more valuable for publishers than social fly-by traffic.

Looking at preferred gateways over time we find that search has been remarkably consistent while direct traffic has become less important and social has grown consistently (until this year). Beneath the averages however we do see significant differences across countries. Portals, which often incorporate search engines and mobile apps, are particularly important in parts of Asia. In Japan, Yahoo! News and Line News remain dominant, while local tech giants Naver and Daum are the key access points in South Korea – developing their own AI solutions. In the Czech Republic, Seznam has been an important local search engine, now supplemented with its own news service and also an innovator in AI. Social and video networks tend to be more important in other parts of Asia, as well as Africa and Latin America, but direct traffic still rules in a few parts of Northern Europe where intermediaries have historically played a smaller role. Publishers without regular direct access will be more vulnerable to platform changes and will inevitably find it harder to build subscription businesses. 

Even in countries with relatively strong brands such as the UK, we find significant generational differences when it comes to gateways. Older people are more likely to maintain direct connections, but in the last few years, especially since the COVID-19 pandemic, we have seen both 18–24s and now 25–35s becoming less likely to go directly to a website or app. Across markets we see the same trends, with the gap between generations just as significant as country-based differences, if not more so.

It is also worth noting the increasing success of mobile aggregators in some countries, many of which are increasingly powered by AI. In the United States, News Break (9%), which was founded by a Chinese tech veteran, has been growing fast with a similar market share to market leader Apple News (11%). In Asian markets, multiple aggregator apps and portals play important gateway and consumption roles, with AI features typically driving ever greater levels of personalisation.

Mobile aggregators tend to be more popular with younger news consumers and are becoming a bigger part of the picture overall, partly fuelled by notifications on relevant topics. In terms of search, there is little evidence that search traffic is drying up and it is certainly not a given that consumers will rush to adopt chatbot interfaces. Even so, publishers expect traffic from search and other gateways to be more unpredictable in the future and will be exploring alternatives with some urgency.

The business of news: subscriptions stalling?

A difficult advertising market, combined with rising costs and the decline in traffic from social media, has put more pressure on the bottom line, especially for publishers that have relied on platform distribution. These factors, together with news of US layoffs at the Los Angeles Times, Washington Post, NBC, Business Insider, Wall Street Journal, Condé Nast, and Sports Illustrated, recently led the New Yorker to publish an article titled ‘Is the Media Prepared for an Extinction-Level Event?’. The article argued that certain kinds of public interest journalism were now uneconomic and that a new, more audience-focused approach was needed.

In this context, and with similar pressures all over the world, we are seeing news media looking to introduce or strengthen reader payment models such as subscription, membership, and donation. Paid models have been a rare bright spot in some of the richer countries in our survey, where publishers still have strong direct connections with readers, but have been difficult to make work elsewhere. As in previous years, our survey shows a significant proportion paying for online news in Norway (40%) and Sweden (31%), and over a fifth in the United States (22%) and Australia (21%), but much lower numbers in Germany (13%), France (11%), Japan (9%), and the UK (8%). There has been very little movement in these top-line numbers in the last year.

[Chart: Proportion that paid for any online news in the last year, via subscription, membership, donation, or one-off payment – selected countries]

Across 20 countries, where a significant number of publishers are pushing digital subscriptions, payment levels have almost doubled since 2014 from 10% to 17%, but following a significant bump during the COVID pandemic, growth has slowed. Publishers have already signed up many of those prepared to pay, and converted some of the more intermittent payers to ongoing subscriptions or donations. But amid a cost-of-living crisis, it is proving difficult to persuade most of the public to do the same.

In most countries, we continue to see a ‘winner takes most’ market, with a few upmarket national titles scooping up a big proportion of users. In the United States, for example, the New York Times recently announced that it has over 10m subscribers (including 9.9m digital-only), while the Washington Post’s numbers have reportedly declined. Having said that, we do find a growing minority of countries where people are paying, on average, for more than one publication, including the United States, Switzerland, Poland, and France (see table below).

This may be because some publishers in these markets are bundling titles together in an all-access subscription (e.g. New York Times, Schibsted, Amedia, Bonnier, Mediahuis). As one example, Amedia’s +Alt product, which offers 100 newspapers, magazines, and podcasts, now accounts for 10% of Norwegian subscriptions, up 6 percentage points this year.

In Nordic countries, it is worth noting the high proportion of local titles being paid for online. In Canada, Ireland, and Switzerland, a significant proportion of subscriptions are going to foreign publishers.

Heavy discounting persists in most but not all markets

This year we have looked at the price being paid for main news subscriptions in around 20 countries and compared this with the price that the main publications are charging for news. The results show that in the US and UK a large number of people are paying a very small amount (often just a few pounds or dollars), with many likely to be on low-price trials, as we found in last year’s qualitative research. 10  In the next chart we find that well over half of those in the US who are paying for digital news report paying less than the median cost of a main subscription ($16), often much less. By contrast, in Norway, we see a different pattern with fewer people paying a very small amount and a larger number grouped around the median price, which in any case is much higher than in the US (the equivalent of $25).

The reasons for these differences become clearer when we compare the proportion that are paying the full sticker price for each brand. This allows us to estimate the proportion of subscribers in each country that are paying full price and the proportion that may be on a trial or other special deal. Using this methodology, we find significant differences between countries, with more than three-quarters (78%) in Poland paying less than full price, almost half (46%) in the United States, but fewer in Norway (38%), Denmark (25%), and France (21%). It is not only the case that more people pay for digital news in the Nordic countries; fewer of them are paying a heavily discounted rate, and in Norway the median price is much higher than in other rich countries such as France, the UK, and the US.
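The estimate described above can be sketched as a simple comparison of what each respondent reports paying against the sticker price of the brand they pay for. The brands, prices, and responses below are invented placeholders to illustrate the approach, not the survey's actual data.

```python
# Sketch of the discount estimate: count respondents whose reported monthly
# payment is below the sticker price of the brand they subscribe to.
# All figures below are hypothetical placeholders.

STICKER_PRICES = {"Brand A": 16.0, "Brand B": 25.0}  # hypothetical monthly prices

# (brand paid for, reported monthly payment) per respondent
responses = [
    ("Brand A", 16.0),   # paying full price
    ("Brand A", 4.0),    # likely on a trial or special deal
    ("Brand B", 25.0),
    ("Brand B", 10.0),
    ("Brand A", 2.0),
]

below_full = sum(1 for brand, paid in responses if paid < STICKER_PRICES[brand])
share_discounted = below_full / len(responses)
print(f"Paying less than full price: {share_discounted:.0%}")  # 3 of 5 → 60%
```

In practice the survey data would also need cleaning (currency conversion, bundle handling, respondents who cannot recall their price), which this sketch leaves out.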

We also asked those not currently paying what they would consider a fair price, if anything. Across markets, just 2% of non-payers say they would pay the equivalent of an average full-price subscription, with 55% saying they wouldn’t be prepared to pay anything. That last number is a little lower in Norway (45%) but considerably higher in the UK (69%) and Germany (68%). In a few markets in the Global South, such as Brazil, we do find more willingness to pay something, but it rarely amounts to more than the equivalent of a few US dollars.

Not every publisher can expect to make reader revenue work, in large part because much of the public simply does not believe news is worth paying for and continues to have access to plenty of free options from commercial, non-profit, and, in some countries, public service providers. But for others, building digital subscriptions based on distinctive content is the main hope for a sustainable future. Discounting is an important part of persuading new customers to sample the product, but publishers will hope that over time, once the habit is created, they can increase prices. It is likely to be a long and difficult road with few winners and many casualties along the way.

Trust levels stable – have we reached the bottom?

There is little evidence that upcoming elections or the increased prevalence of generative AI have so far had any material impact on trust in the news. Across markets, around four in ten (40%) say they trust most news most of the time, the same score as last year. Finland remains the country with the highest levels of trust (69%), while Greece and Hungary (23%) have the lowest. Morocco, which was included in the survey for the first time, has a relatively low trust rating (31%) compared with countries elsewhere in Africa, perhaps a reflection of the fact that media control is largely in the hands of political and business elites.

Low trust scores in some other countries such as the US (32%), Argentina (30%), and France (31%) can be partly linked to high levels of polarisation and divisive debates over politics and culture.

As always, it is important to underline that our data are based on people’s perceptions of how trustworthy the media, or individual news brands, are. These scores are aggregates of subjective opinions, not an objective measure of underlying trustworthiness, and as our previous work has shown, any year-on-year changes are often at least as much about political and social factors as narrowly about the news itself. 11

This year, we have also been exploring the key factors driving trust or lack of trust in the news media. We find that high standards, a transparent approach, lack of bias, and fairness in terms of media representation are the four primary factors that influence trust. The top responses are strongly linked and are consistent across countries, ages, and political viewpoints. An overly negative or critical approach, which is much discussed by politicians when critiquing the media, is seen as the least important reason in our list, suggesting that audiences still expect journalists to ask the difficult questions.

These results may give a clear steer to media companies on how to build greater trust. Most of the public want news to be accurate and fair, to avoid sensationalism, to be open about any agendas and biases (including a lack of diversity), to own up to mistakes – and not to pull punches when investigating the rich and powerful. People do not necessarily agree on what this looks like in practice, or on which individual brands deliver it. But what they hope news will offer is remarkably similar across many different groups.

Audience interest in transparency and openness seems to chime with some of the ideas behind recent industry initiatives, such as the Trust Project, a non-profit initiative that encourages publishers to reveal more of their workings using so-called ‘trust indicators’, and the Journalism Trust Initiative orchestrated by Reporters without Borders. Some large news organisations, such as the BBC, have gone further, creating units or sub-brands that answer audience questions or aim to explain how the news is checked. BBC Verify, launched in May 2023, aims to show and share the behind-the-scenes work of checking and verifying information, especially images and video content, in an era of growing misinformation. ‘People want to know not just what we know (and don't know), but how we know it,’ says BBC News CEO Deborah Turness.

Journalists and members of the public often mean different things when talking about transparency: the former tend to focus on reporting practices, while the latter are often voicing suspicion that ulterior commercial and/or political motives are at play. Leaving that aside, our data suggest that these initiatives may not work for all audiences. Transparency is considered most important by those who already trust the news (84%) but much less so by those who are generally distrustful (68%), where there is a risk that it hardens the position of those already suspicious of a brand, if they feel that verification will not be applied equally to both sides of an argument. 12 Those who are less interested in the news are also less likely to feel that being transparent about how the news is made is important.

Attention loss, news avoidance, and news fatigue

For several years we have pointed to a number of measures that suggest growing ambivalence about the news, despite – or perhaps because of – the uncertain and chaotic times in which we live. Interest in news continues to fall in some markets, but has stabilised or increased in others, especially those like Argentina and the United States that are going through, or have recently held, elections.

The long-term trend, however, is down in every country apart from Finland, with high interest halving in some countries over the last decade (UK 70% in 2015; 38% in 2024). Women and young people make up a significant proportion of that decline.

While news interest may have stabilised a bit this year, the proportion that say they selectively avoid the news (sometimes or often) is up by 3pp this year to 39% – a full 10pp higher than it was in 2017. Notable country-based rises this year include Ireland (+10pp), Spain (+8pp), Italy (+7pp), Germany (+5pp), Finland (+5pp), the United States (+5pp), and Denmark (+4pp). The underlying reasons for this have not changed. Selective news avoiders say the news media are often repetitive and boring. Some tell us that the negative nature of the news itself makes them feel anxious and powerless.

[Chart: Selective news avoidance at highest levels recorded – all markets]

But it is not just that the news can be depressing; it is also relentless. Across markets the same proportion, around four in ten (39%), say they feel ‘worn out’ by the amount of news these days, up from 28% in 2019, with respondents frequently mentioning the way that coverage of wars, disasters, and politics was squeezing out other things. The increase has been greater in Spain (+18pp), Denmark (+16pp), Brazil (+16pp), Germany (+15pp), South Africa (+12pp), France (+9pp), and the United Kingdom (+8pp), but smaller in the United States (+3pp), where news fatigue was a bigger factor five years ago. There are no significant differences by age or education, though women (43%) are much more likely to complain about news overload than men (34%).

Since we started tracking these issues, usage of smartphones has increased, as has the number of notifications sent from apps of all kinds, perhaps contributing to the sense that the news has become hard to escape. Platforms that require volume of content to feed their algorithms are potentially another factor driving these increases. It was notable that in our industry survey, at the start of 2024, most publishers said they were planning to produce more videos, more podcasts, and more newsletters this year. 13

User needs and information gaps

Industry leaders recognise the twin challenges of news fatigue and news avoidance, especially around long-running stories such as the wars in Ukraine and Gaza. At the same time, disillusion with politics in general may be contributing to declining interest, especially among younger news consumers, as previous reports have shown. Editors are looking for new ways to cover these important stories, by making the news more accessible and engaging, as well as by broadening the news agenda without ‘dumbing down’.

One way in which publishers have been trying to square this circle has been through a ‘user needs’ model, where stories that update people about the latest news are supplemented by commissioning more that educate, inspire, provide perspective, connect, or entertain.

Originally based on audience research at the BBC, the model has been implemented by a number of news organisations around the world. In our survey this year, we asked about eight different needs included in User Needs 2.0, which are nested in four basic needs of knowledge, understanding, feeling, and doing. 14  Our findings show that the three most important user needs globally are staying up to date (‘update me’), learning more (‘educate me’), and gaining varied perspectives (‘give me perspective’). This is pretty consistent across different demographic groups, although the young are a bit more interested in stories that inspire, connect, and entertain when compared with older groups. In the United States, for example, over half (52%) of under 35s think having stories that make them feel better about the world is very or extremely important, compared with around four in ten (43%) of over 35s.

We also asked about how good the media were perceived to be at satisfying each user need. By combining these data with the data on importance, we can create what we call a User Needs Priority Index. This is a form of gap analysis, whereby we take the percentage point gap between the proportion that think a particular need is important and the proportion that think the news media do a good job of providing it and multiply this by the overall importance (as a decimal) to identify the most important gaps. Audiences say, for example, that updating is the most important need, but also think that the media do a good job in this area already. By contrast, there is a much bigger gap in providing different perspectives (e.g. more context, wider set of views) and also around news that ‘makes me feel better about the world’ (offers more hope and optimism).
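The Priority Index calculation described above can be sketched in a few lines. The user needs and percentages below are illustrative placeholders chosen to show the mechanics, not the report's actual survey figures.

```python
# Sketch of the User Needs Priority Index: the percentage-point gap between
# perceived importance and perceived media performance for each need,
# weighted by overall importance (expressed as a decimal).
# All figures below are hypothetical placeholders.

def priority_index(importance_pct: float, performance_pct: float) -> float:
    """(importance - performance) * (importance / 100)."""
    gap = importance_pct - performance_pct
    return gap * (importance_pct / 100)

# Hypothetical (importance %, performance %) pairs for three user needs
needs = {
    "update me":           (80, 75),  # very important, and already well served
    "give me perspective": (65, 45),  # important, but underserved
    "divert me":           (30, 40),  # less important, arguably over-served
}

scores = {need: priority_index(imp, perf) for need, (imp, perf) in needs.items()}
for need, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{need}: {score:+.1f}")
```

The weighting means a large gap on a widely shared need ('give me perspective' here) outranks a small gap on an even more important one, which matches the report's point that updating is well served while perspective and hope are the bigger opportunities.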

News organisations may draw different conclusions from these data, depending on their own mission and target audience, but taken as a whole, it is clear news consumers would prefer to dial down the constant updating of news, while dialling up context and wider perspectives that help people better understand the world around them. Most people don’t want the news to be made more entertaining, but they do want more stories that provide more personal utility, help them connect with others, and give people a sense of hope.

Agenda and topic gaps

Adopting a user needs model is one way to address some of the issues that lie behind selective news avoidance and low engagement, but a topic-based lens may also be useful. When looking at levels of interest in different subject areas by age, we find commonalities but also some stark differences. For all age groups, local and international news are considered the most important topics, but there is less consensus around political news. This doesn’t feature in the top five for under-35s but it is a very different story for over-45s where politics remains firmly in the top three. Younger groups are more interested in the environment and climate change, as well as other subjects such as wellness, which are less of a priority for older groups.

If anything, we find even bigger gaps around gender, with men more interested in politics and sport; women more interested in health/wellness and the environment. Much of this is not new but a reminder that older, male-dominated newsrooms may not always be instinctively in tune with the needs of those who don’t look or think like them.

Beyond interest, we also asked respondents to what extent, if at all, they felt their information needs were being met around each of these topics. Across countries we find that most people feel their needs around sport and politics (and often celebrity news) are well served, while there are substantial gaps in other areas such as education, the environment, mental health, and social justice.

Local news is a mixed bag. In some countries, including the United States, more than two-thirds (68%) feel that most or all of their needs are being met, despite the loss of many local newspaper titles and journalist jobs over the past decade. Our data suggest that in most countries much of the public does not share the view that there is a crisis of local news – or at least that much of the information they value is being provided by other community actors accessed via search engines or social media.

But in a few countries, notably the UK and Australia, only a little over half say their needs are being met, suggesting that in these countries at least, local news needs are being significantly underserved. These are also countries where local publishers have taken a disproportionate share of job cuts. In countries such as Portugal, Bulgaria, and Japan, the higher proportion of unmet needs is largely down to lower interest in local news overall, leaving aside the important role that local news can play in supporting democracy.

Overall, we find clear differences in terms of subject preferences by age and gender which help explain why some groups are engaging less with the news or avoiding it altogether. There is no one-size-fits-all answer to these issues but improving coverage of subjects with higher interest that are currently underserved would be a good starting point.

New formats and the role of audio

Publishers are also exploring different formats as a way of addressing the engagement challenge, especially those that are less immediately reliant on platform algorithms, such as podcasts.

In the last few years, leading publishers such as the New York Times and Schibsted have joined public broadcasters in trying to build their own platforms for distribution to compete with giants like Spotify, using exclusive content or windowing strategies to drive direct traffic. Legacy print publishers have been ramping up their podcast production, finding the combination of text and audio a good fit for specialist journalistic beats, and relatively low cost compared with video. In countries such as the United Kingdom, a strong independent sector is emerging, with a range of new politics and economics shows launched this year, as well as US spin-offs for popular daily podcasts such as the News Agents. Many of the most popular podcasts are now filmed and distributed via video platforms such as YouTube, further blurring the lines between podcasts and video. Across 20 countries where we have been measuring podcast consumption since 2018, just over a third (35%) have accessed one or more podcasts in the last month, but only just over one in ten (13%) regularly use a news podcast. The share of podcast listening accounted for by news shows has remained roughly the same as it was seven years ago.

Podcasts continue to attract younger, richer, and better educated audiences, with news and politics shows heavily skewed towards men, partly due to the dominance of male hosts, as we reported last year. Many markets have become saturated with content, making it hard for new shows to be discovered and also for existing shows to grow audiences.

Conclusions

Our report this year sees news publishers caught in the midst of another set of far-reaching technological and behavioural changes, adding to the pressures on sustainable journalism. But it’s not just news media. The giants of the tech world such as Meta and Google are themselves facing disruption from rivals like Microsoft as well as more agile AI-driven challengers and are looking to maintain their position. In the process, they are changing the way their products work at some pace, with knock-on impacts for an increasingly delicate news ecosystem.

Some kind of platform reset is underway with more emphasis on keeping traffic within their environments and with greater focus on formats proven to drive engagement, such as video. Many newer platforms with younger user bases are far less centred on text and links than incumbent platforms, with content shaped by a multitude of (sometimes hugely popular) creators rather than by established publishers. In some cases, news is being excluded or downgraded because technology companies think it causes more trouble than it is worth. Traffic from social media and search is likely to become more unpredictable over time, but getting off the algorithmic treadmill won’t be easy.

While some media companies continue to perform well in this challenging environment, many others are struggling to convince people that their news is worth paying attention to, let alone paying for. Interest in the news has been falling, the proportion avoiding it has increased, trust remains low, and many consumers are feeling increasingly overwhelmed and confused by the amount of news. Artificial intelligence may make this situation worse, by creating a flood of low-quality content and synthetic media of dubious provenance.

But these shifts also offer a measure of hope that some publishers can establish a stronger position. If news brands are able to show that their journalism is built on accuracy, fairness, and transparency – and that humans remain in control – audiences are more likely to respond positively. Re-engaging audiences will also require publishers to rethink some of the ways that journalism has been practised in the past; to find ways to be more accessible without dumbing down; to report the world as it is whilst also giving hope; to give people different perspectives without turning it into an argument. In a world of superabundant content, success is also likely to be rooted in standing out from the crowd, to be a destination for something that the algorithm and the AI can’t provide while remaining discoverable via many different platforms. Do all that and there is at least a possibility that more people, including some younger ones, will increasingly value and trust news brands once again.

1   https://www.theguardian.com/technology/2024/mar/26/instagram-meta-political-content-opt-in-rules-threads

2   https://reutersinstitute.politics.ox.ac.uk/journalism-media-and-technology-trends-and-predictions-2024

3   While not necessarily a reliable indicator of underlying trustworthiness, such reliance on ‘realism heuristics’ also helps shape often high trust in television news versus other sources.

4   https://www.newsguardtech.com/misinformation-monitor/september-2022/

5   https://www.theguardian.com/media/2024/mar/26/tech-firms-poised-to-mass-hire-factcheckers-before-eu-elections

6   https://www.semafor.com/article/04/19/2024/tiktok-fight-in-kenya

7   https://wan-ifra.org/2023/11/ai-and-robot-writer-klara-key-todumonts-kolner-stadt-anzeiger-mediens-tech-future-as-it-switches-off-its-presses/

8   https://www.d-id.com/resources/case-study/radioformula/

9   https://www.nature.com/articles/s41599-023-02282-w

10   https://reutersinstitute.politics.ox.ac.uk/paying-news-price-conscious-consumers-look-value-amid-cost-living-crisis

11   https://reutersinstitute.politics.ox.ac.uk/trust-news-project

12   https://europeanconservative.com/articles/commentary/whos-verifying-bbc-verify/

13   https://reutersinstitute.politics.ox.ac.uk/journalism-media-and-technology-trends-and-predictions-2024

14   https://smartocto.com/research/userneeds/

signup block

  • Perspectives on trust in news
  • The use of AI in journalism
  • Audiences and user needs
  • How much people pay for news
  • The rise of news influencers
  • Lee en español
  • Country and market data
  • Methodology

IMAGES

  1. Respondents for qualitative data.

    how many respondents should be in qualitative research

  2. How many interviews should you conduct for your qualitative research

    how many respondents should be in qualitative research

  3. Understanding Qualitative Research: An In-Depth Study Guide

    how many respondents should be in qualitative research

  4. 😍 What is respondents in research. Respondents Of The Research And

    how many respondents should be in qualitative research

  5. Qualitative Research Charts

    how many respondents should be in qualitative research

  6. Qualitative Research: Definition, Types, Methods and Examples

    how many respondents should be in qualitative research

VIDEO

  1. Open vs Closed Surveys with Chisquares

  2. Analyzing Qualitative Data: Indepth Interviews and Focus Groups

  3. 3.9 Criteria For Research Quality

  4. Qualitative research Meaning

  5. 3.7 How Many Cases Are Enough

  6. Getting Started qualitative analysis

COMMENTS

  1. Big enough? Sampling in qualitative inquiry

    Any senior researcher, or seasoned mentor, has a practiced response to the 'how many' question. Mine tends to start with a reminder about the different philosophical assumptions undergirding qualitative and quantitative research projects (Staller, 2013).As Abrams (2010) points out, this difference leads to "major differences in sampling goals and strategies."(p.537).

  2. Sample size: how many participants do I need in my research?

    It is the ability of the test to detect a difference in the sample, when it exists in the target population. Calculated as 1-Beta. The greater the power, the larger the required sample size will be. A value between 80%-90% is usually used. Relationship between non-exposed/exposed groups in the sample.

  3. How many participants do I need for qualitative research?

    The answer lies somewhere in between. It's often a good idea (for qualitative research methods like interviews and usability tests) to start with 5 participants and then scale up by a further 5 based on how complicated the subject matter is. You may also find it helpful to add additional participants if you're new to user research or you ...

  4. Sampling

    How many participants you include in your study will vary based on your research design, research question, and sampling approach . Further reading: Babbie, E. (2008). The basics of social research (4th ed). Belmont: Thomson Wadsworth. Creswell, J.W. & Creswell, J.D. (2018). Research design: Qualitative, quantitative and mixed methods ...

  5. PDF Determining the Sample in Qualitative Research

    years particularly in carrying out qualitative research in social sciences. Many scholars have paid attention to the issues of deciding the sufficient sample size in qualitative studies (Barkhuizen, 2014; Blaikie, 2018; Morse, 2000; Wimpenny & Savin-Baden, ... called the 'participants' or 'informants' rather than respondents (Nakkeeran, 2016 ...

  6. Series: Practical guidance to qualitative research. Part 3: Sampling

    This article is the third paper in a series of four articles aiming to provide practical guidance to qualitative research. In an introductory paper, we have described the objective, nature and outline of the Series . Part 2 of the series focused on context, research questions and design of qualitative research . In this paper, Part 3, we ...

  7. What's in a Number? Understanding the Right Sample Size for Qualitative

    Between 15-30. Based on research conducted on this issue, if you are building similar segments within the population, InterQ's recommendation for in-depth interviews is to have a sample size of 15-30. In some cases, a minimum of 10 is sufficient, assuming there has been integrity in the recruiting process. With the goal to maintain a rigorous ...

  8. Qualitative Research Part II: Participants, Analysis, and Quality

    The two texts by Creswell 2008 and 2009 are clear and practical. 1, 2 In 2008, the British Medical Journal offered a series of short essays on qualitative research; the references provided are easily read and digested. 3 -,8 For those wishing to pursue qualitative research in more detail, a suggestion is to start with the appropriate chapters ...

  9. Sample sizes for saturation in qualitative research: A systematic

    Our results echo others, that "rigorously collected qualitative data from small samples can substantially represent the full dimensionality of people's experiences" (Young and Casey, 2019, p.12) and therefore should not be viewed or presented as a limitation when evaluating the rigor of qualitative research.

  10. (PDF) How many participants are necessary for a qualitative study

    Abstract. One of the difficulties associated with qualitative research refers to sample size. Researchers often fail to present a justification for their N and are criticized for that. This ...

  11. Sample Size Policy for Qualitative Studies Using In-Depth Interviews

    There are several debates concerning what sample size is the right size for such endeavors. Most scholars argue that the concept of saturation is the most important factor to think about when mulling over sample size decisions in qualitative research (Mason, 2010).Saturation is defined by many as the point at which the data collection process no longer offers any new or relevant data.

  12. How Many Participants Do I Need? A Guide to Sample Estimation

    More recently, Hagaman and Wutich (2017) explored how many interviews were needed to identify metathemes, or those overarching themes, in qualitative research. In contrast to Guest's (2006) work, and in a very different study, Hagaman and Wutich (2017) found that a larger sample of between 20-40 interviews was necessary to detect those ...

  13. Qualitative Sample Size Calculator

    What is a good sample size for a qualitative research study? Our sample size calculator will work out the answer based on your project's scope, participant characteristics, researcher expertise, and methodology. Just answer 4 quick questions to get a super actionable, data-backed recommendation for your next study.

  14. What is the ideal Sample Size in Qualitative Research?

    Let's explore this whole issue of panel size and what you should be looking for from participant panels when conducing qualitative research. First off, look at quality versus quantity. Most likely, your company is looking for market research on a very specific audience type. B2B decision makers in human resources.

  15. Planning Qualitative Research: Design and Decision Making for New Researchers

    While many books and articles guide various qualitative research methods and analyses, there is currently no concise resource that explains and differentiates among the most common qualitative approaches. We believe novice qualitative researchers, students planning the design of a qualitative study or taking an introductory qualitative research course, and faculty teaching such courses can benefit from such a resource.

  16. How to choose the right sample size for a qualitative study… and

    The question of how many participants are enough for a qualitative interview study is, in my opinion, one of the most difficult questions to answer from the literature. In fact, many authors who set out to find specific guidelines on the ideal sample size in qualitative research have concluded that no definitive guidelines exist.

  17. Sample size for qualitative research

    In a qualitative research project, how large should the sample be? How many focus group respondents, individual depth interviews (IDIs), or ethnographic observations are needed? We do have some informal rules of thumb.

  18. Characterising and justifying sample size sufficiency in interview-based studies

    Sample adequacy in qualitative inquiry pertains to the appropriateness of the sample's composition and size. It is an important consideration in evaluations of the quality and trustworthiness of much qualitative research, particularly for research situated within a post-positivist tradition that retains a degree of commitment to realist ontological premises.

  19. Riddle me this: How many interviews (or focus groups) are enough?

    This empirical analysis of interview data found it took 8 interviews to reach 80% saturation (range 5-11) and 16 interviews to reach 90% saturation (range 11-26). "But what about focus groups?" you ask. An empirically based study by Coenen et al. (2012) (gated) found that five focus groups were enough to reach saturation for their inductive thematic analysis.
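The arithmetic behind figures like these can be approximated with a simple binomial model: if a proportion p of your users would voice a given theme, the chance of hearing it at least once in n independent interviews is 1 - (1 - p)^n. A minimal sketch, under the (strong) assumptions of independent interviews and a fixed theme prevalence:

```python
import math

# Smallest n such that P(theme heard at least once) >= confidence,
# where p is the share of users who would voice the theme.
# Solves 1 - (1 - p)**n >= confidence for n.

def interviews_needed(p, confidence=0.95):
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

print(interviews_needed(0.5))   # -> 5: a theme half your users share
print(interviews_needed(0.2))   # -> 14: a rarer, one-in-five theme
```

Note how the widely shared themes surface within about 5 interviews, which is consistent with the "start with 5, then scale up" advice earlier in this article, while rarer themes need samples in the teens or beyond.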

  20. How many participants do I need in my qualitative research?

    It is all about reaching the point of saturation, the point at which you are already getting repetitive responses (see Lincoln and Guba, 1985).

  21. How many qualitative interviews is enough?

    In Social Research Methods, Bryman cites Warren's (2002) suggestion that the minimum number of interviews needs to be between twenty and thirty for an interview-based qualitative study.

  22. Sample Size for Interview in Qualitative Research in Social Sciences: A

    Homogeneity of sample composition determines the size of a sample for particular qualitative research. According to Kindsiko & Poltimae (2019), a large sample size often comes at the expense of homogeneity across respondents: conducting interviews in different countries, across all levels of an organizational hierarchy, and so on.

  23. How many participants do we have to include in properly powered

    Given that an effect size of d = .4 is a good first estimate of the smallest effect size of interest in psychological research, we already need over 50 participants for a simple comparison of two within-participants conditions if we want to run a study with 80% power. This is more than current practice. In addition, as soon as a between-groups variable or an interaction is involved, the required number of participants increases rapidly.
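The "over 50 participants" figure above can be reproduced with the standard normal-approximation formula for a within-participants comparison, n ≈ ((z_{1-α/2} + z_power) / d)². A sketch using only the Python standard library (the exact t-based answer is slightly higher, which is why the entry says "over" 50):

```python
from math import ceil
from statistics import NormalDist

# Normal-approximation sample size for a paired (within-participants)
# comparison at significance level alpha and the given power.
# n ~ ((z_{1 - alpha/2} + z_{power}) / d) ** 2

def paired_n(d, power=0.80, alpha=0.05):
    z = NormalDist()
    return ceil(((z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) / d) ** 2)

print(paired_n(0.4))  # -> 50 under this approximation
print(paired_n(0.8))  # -> 13: larger effects need far fewer participants
```

This contrast with the single-digit-to-teens numbers typical of qualitative saturation is exactly why qualitative and quantitative sample size guidance should never be mixed up.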

  24. Qualitative vs Quantitative Research

    Quantitative research typically requires many respondents to achieve a representative result. When weighing up qualitative vs quantitative research, it's largely a matter of choosing the method appropriate to your research goals. If you're in the position of having to choose one method over another, it's worth knowing the strengths and weaknesses of each.
