
9 Survey research

Survey research is a research method involving the use of standardised questionnaires or interviews to collect data about people and their preferences, thoughts, and behaviours in a systematic manner. Although census surveys were conducted as early as Ancient Egypt, the survey as a formal research method was pioneered in the 1930s and 1940s by sociologist Paul Lazarsfeld to examine the effects of radio on political opinion formation in the United States. It has since become a very popular method for quantitative research in the social sciences.

The survey method can be used for descriptive, exploratory, or explanatory research. This method is best suited for studies that have individual people as the unit of analysis. Although other units of analysis, such as groups, organisations or dyads—pairs of organisations, such as buyers and sellers—are also studied using surveys, such studies often use a specific person from each unit as a ‘key informant’ or a ‘proxy’ for that unit. Consequently, such surveys may be subject to respondent bias if the chosen informant does not have adequate knowledge or has a biased opinion about the phenomenon of interest. For instance, Chief Executive Officers may not adequately know employees’ perceptions or teamwork in their own companies, and may therefore be the wrong informant for studies of team dynamics or employee self-esteem.

Survey research has several inherent strengths compared to other research methods. First, surveys are an excellent vehicle for measuring a wide variety of unobservable data, such as people’s preferences (e.g., political orientation), traits (e.g., self-esteem), attitudes (e.g., toward immigrants), beliefs (e.g., about a new law), behaviours (e.g., smoking or drinking habits), or factual information (e.g., income). Second, survey research is also ideally suited for remotely collecting data about a population that is too large to observe directly. A large area—such as an entire country—can be covered by postal, email, or telephone surveys using meticulous sampling to ensure that the population is adequately represented in a small sample. Third, due to their unobtrusive nature and the ability to respond at one’s convenience, questionnaire surveys are preferred by some respondents. Fourth, interviews may be the only way of reaching certain population groups such as the homeless or illegal immigrants for which there is no sampling frame available. Fifth, large sample surveys may allow detection of small effects even while analysing multiple variables, and depending on the survey design, may also allow comparative analysis of population subgroups (i.e., within-group and between-group analysis). Sixth, survey research is more economical in terms of researcher time, effort and cost than other methods such as experimental research and case research. At the same time, survey research also has some unique disadvantages. It is subject to a large number of biases such as non-response bias, sampling bias, social desirability bias, and recall bias, as discussed at the end of this chapter.

Depending on how the data is collected, survey research can be divided into two broad categories: questionnaire surveys (which may be postal, group-administered, or online surveys), and interview surveys (which may be personal, telephone, or focus group interviews). Questionnaires are instruments that are completed in writing by respondents, while interviews are completed by the interviewer based on verbal responses provided by respondents. As discussed below, each type has its own strengths and weaknesses in terms of their costs, coverage of the target population, and researcher’s flexibility in asking questions.

Questionnaire surveys

Invented by Sir Francis Galton, a questionnaire is a research instrument consisting of a set of questions (items) intended to capture responses from respondents in a standardised manner. Questions may be unstructured or structured. Unstructured questions ask respondents to provide a response in their own words, while structured questions ask respondents to select an answer from a given set of choices. Subjects’ responses to individual questions (items) on a structured questionnaire may be aggregated into a composite scale or index for statistical analysis. Questions should be designed in such a way that respondents are able to read, understand, and respond to them in a meaningful way, and hence the survey method may not be appropriate or practical for certain demographic groups such as children or the illiterate.
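For instance, a multi-item Likert scale is typically scored by reverse-coding any negatively worded items and then averaging (or summing) the items for each respondent. The short sketch below illustrates this scoring logic in Python; the four-item scale, item names, and responses are invented purely for illustration.

# Minimal sketch: aggregating Likert items into a composite scale score.
# Assumes a hypothetical 4-item scale scored 1-5, where item "se3" is
# negatively worded and must be reverse-coded before averaging.

def reverse_code(value, scale_max=5, scale_min=1):
    """Flip a negatively worded item so that high scores mean the same thing."""
    return scale_max + scale_min - value

def composite_score(response, reverse_items=("se3",)):
    """Average all items for one respondent after reverse-coding."""
    scored = {
        item: reverse_code(value) if item in reverse_items else value
        for item, value in response.items()
    }
    return sum(scored.values()) / len(scored)

# One hypothetical respondent's answers to the four items.
respondent = {"se1": 4, "se2": 5, "se3": 2, "se4": 4}
print(round(composite_score(respondent), 2))  # -> 4.25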

Most questionnaire surveys tend to be self-administered postal surveys, where the same questionnaire is posted to a large number of people, and willing respondents can complete the survey at their convenience and return it in prepaid envelopes. Postal surveys are advantageous in that they are unobtrusive and inexpensive to administer, since bulk postage is cheap in most countries. However, response rates from postal surveys tend to be quite low, since most people ignore survey requests. There may also be long delays (several months) in respondents completing and returning the survey, or they may simply lose it. Hence, the researcher must continuously monitor responses as they are returned, track non-respondents, and send them repeated reminders (two or three reminders at intervals of one to one and a half months are ideal). Questionnaire surveys are also not well suited to issues that require clarification on the part of the respondent or that require detailed written responses. Longitudinal designs can be used to survey the same set of respondents at different times, but response rates tend to fall precipitously from one survey to the next.
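Monitoring returns and scheduling reminders is largely bookkeeping, as the minimal sketch below illustrates; the identifiers, dates, and six-week reminder interval are illustrative assumptions rather than fixed rules.

from datetime import date, timedelta

# Minimal sketch: tracking postal survey returns and flagging who is due
# a reminder. Names, dates, and the 6-week interval are illustrative.
mailed_on = date(2024, 3, 1)
reminder_interval = timedelta(weeks=6)   # roughly one to one and a half months
sample = {"A101": None, "A102": date(2024, 3, 20), "A103": None}  # id -> return date

returned = sum(1 for d in sample.values() if d is not None)
response_rate = returned / len(sample)
print(f"Current response rate: {response_rate:.0%}")   # -> 33%

today = date(2024, 4, 15)
if today - mailed_on >= reminder_interval:
    due_reminder = [pid for pid, d in sample.items() if d is None]
    print("Send reminder to:", due_reminder)           # -> ['A101', 'A103']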

A second type of survey is a group-administered questionnaire. A sample of respondents is brought together at a common place and time, and each respondent is asked to complete the survey questionnaire while in that room. Respondents enter their responses independently without interacting with one another. This format is convenient for the researcher, and a high response rate is assured. If respondents do not understand any specific question, they can ask for clarification. In many organisations, it is relatively easy to assemble a group of employees in a conference room or lunch room, especially if the survey is approved by corporate executives.

A more recent type of questionnaire survey is an online or web survey. These surveys are administered over the Internet using interactive forms. Respondents may receive an email request for participation in the survey with a link to a website where the survey may be completed. Alternatively, the survey may be embedded into an email, and can be completed and returned via email. These surveys are very inexpensive to administer, results are instantly recorded in an online database, and the survey can be easily modified if needed. However, if the survey website is not password-protected or designed to prevent multiple submissions, the responses can be easily compromised. Furthermore, sampling bias may be a significant issue since the survey cannot reach people who do not have computer or Internet access, such as many of the poor, senior, and minority groups, and the respondent sample is skewed toward a younger demographic who are online much of the time and have the time and ability to complete such surveys. Computing the response rate may be problematic if the survey link is posted on LISTSERVs or bulletin boards instead of being emailed directly to targeted respondents. For these reasons, many researchers prefer dual-media surveys (e.g., postal survey and online survey), allowing respondents to select their preferred method of response.

Constructing a survey questionnaire is an art. Numerous decisions must be made about the content of questions, their wording, format, and sequencing, all of which can have important consequences for the survey responses.

Response formats. Survey questions may be structured or unstructured. Responses to structured questions are captured using one of the following response formats:

Dichotomous response, where respondents are asked to select one of two possible choices, such as true/false, yes/no, or agree/disagree. An example of such a question is: Do you think that the death penalty is justified under some circumstances? (circle one): yes / no.

Nominal response, where respondents are presented with more than two unordered options, such as: What is your industry of employment?: manufacturing / consumer services / retail / education / healthcare / tourism and hospitality / other.

Ordinal response, where respondents have more than two ordered options, such as: What is your highest level of education?: high school / bachelor’s degree / postgraduate degree.

Interval-level response, where respondents are presented with a 5-point or 7-point Likert scale, semantic differential scale, or Guttman scale. Each of these scale types was discussed in a previous chapter.

Continuous response, where respondents enter a continuous (ratio-scaled) value with a meaningful zero point, such as their age or tenure in a firm. These responses generally tend to be of the fill-in-the-blank type.
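One way to make these formats concrete is to record each question as a small data structure listing its type and permissible responses, so that answers can be checked automatically. The sketch below does this for the examples above; the field names and validation rule are illustrative assumptions.

# Minimal sketch: representing each structured response format as data,
# so a respondent's answer can be checked against the allowed choices.
# Question texts are taken from the examples above; field names are assumptions.

questions = {
    "q1": {"format": "dichotomous", "text": "Is the death penalty justified under some circumstances?",
           "choices": ["yes", "no"]},
    "q2": {"format": "nominal", "text": "What is your industry of employment?",
           "choices": ["manufacturing", "consumer services", "retail", "education",
                       "healthcare", "tourism and hospitality", "other"]},
    "q3": {"format": "ordinal", "text": "What is your highest level of education?",
           "choices": ["high school", "bachelor's degree", "postgraduate degree"]},
    "q4": {"format": "interval", "text": "I feel good about myself.",
           "choices": list(range(1, 6))},          # 5-point Likert scale
    "q5": {"format": "continuous", "text": "What is your age?", "choices": None},  # fill in the blank
}

def is_valid(question_id, answer):
    """Accept any non-negative number for continuous items, otherwise require a listed choice."""
    q = questions[question_id]
    if q["choices"] is None:
        return isinstance(answer, (int, float)) and answer >= 0
    return answer in q["choices"]

print(is_valid("q3", "bachelor's degree"))  # True
print(is_valid("q1", "maybe"))              # False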

Question content and wording. Responses obtained in survey research are very sensitive to the types of questions asked. Poorly framed or ambiguous questions will likely result in meaningless responses with very little value. Dillman (1978) [1] recommends several rules for creating good survey questions. Every single question in a survey should be carefully scrutinised for the following issues:

Is the question clear and understandable?: Survey questions should be stated in very simple language, preferably in the active voice, and without complicated words or jargon that a typical respondent may not understand. All questions in the questionnaire should be worded in a similar manner to make them easy to read and understand. The only exception is if your survey is targeted at a specialised group of respondents, such as doctors, lawyers and researchers, who use such jargon in their everyday environment.

Is the question worded in a negative manner?: Negatively worded questions, such as ‘Should your local government not raise taxes?’, tend to confuse many respondents and lead to inaccurate responses. Double negatives should be avoided when designing survey questions.

Is the question ambiguous?: Survey questions should not use words or expressions that may be interpreted differently by different respondents (e.g., words like ‘any’ or ‘just’). For instance, if you ask a respondent, ‘What is your annual income?’, it is unclear whether you mean salary and wages only or also dividend, rental, and other income, and whether you mean personal income, family income (including a spouse’s wages), or personal and business income. Different interpretations by different respondents will lead to incomparable responses that cannot be interpreted correctly.

Does the question have biased or value-laden words?: Bias refers to any property of a question that encourages subjects to answer in a certain way. Kenneth Rasinski (1989) [2] examined several studies on people’s attitudes toward government spending, and observed that respondents tend to indicate stronger support for ‘assistance to the poor’ than for ‘welfare’, even though both terms had the same meaning. Similarly, more support was observed for ‘halting rising crime rates’ than for ‘law enforcement’, more for ‘solving problems of big cities’ than for ‘assistance to big cities’, and more for ‘dealing with drug addiction’ than for ‘drug rehabilitation’. Biased language or tone tends to skew observed responses. It is often difficult to anticipate biased wording in advance, but to the greatest extent possible, survey questions should be carefully scrutinised to avoid biased language.

Is the question double-barrelled?: Double-barrelled questions are those that can have multiple answers. For example: ‘Are you satisfied with the hardware and software provided for your work?’. How should a respondent answer if they are satisfied with the hardware but not the software, or vice versa? It is always advisable to separate double-barrelled questions into two separate questions: ‘Are you satisfied with the hardware provided for your work?’ and ‘Are you satisfied with the software provided for your work?’. Another example: ‘Does your family favour public television?’. Some people may favour public TV for themselves, but favour certain cable TV programs such as Sesame Street for their children.

Is the question too general?: Questions that are too general may not accurately convey respondents’ perceptions. If you asked someone how they liked a certain book on a response scale ranging from ‘not at all’ to ‘extremely well’, and that person selected ‘extremely well’, what does that actually mean? Instead, ask more specific behavioural questions, such as, ‘Will you recommend this book to others?’ or ‘Do you plan to read other books by the same author?’. Likewise, instead of asking, ‘How big is your firm?’ (which may be interpreted differently by respondents), ask, ‘How many people work for your firm?’ and/or ‘What is the annual revenue of your firm?’, which are both measures of firm size.

Is the question too detailed?: Avoid unnecessarily detailed questions that serve no specific research purpose. For instance, do you need the age of each child in a household, or is the number of children in the household sufficient? However, if unsure, it is better to err on the side of detail than generality.

Is the question presumptuous?: If you ask, ‘What do you see as the benefits of a tax cut?’, you are presuming that the respondent sees the tax cut as beneficial. Many people may not view tax cuts as beneficial, because tax cuts generally lead to less funding for public schools, larger class sizes, and fewer public services such as police, ambulance, and fire services. Avoid questions with built-in presumptions.

Is the question imaginary?: A popular question in many television game shows is, ‘If you win a million dollars on this show, how will you spend it?’. Most respondents have never been faced with such an amount of money and have never thought about it—they may not even know that after taxes, they will get only about $640,000 or so in the United States, and in many cases, that amount is spread over a 20-year period—and so their answers tend to be quite random, such as taking a tour around the world, buying a restaurant or bar, spending it on education, saving for retirement, helping parents or children, or having a lavish wedding. Imaginary questions have imaginary answers, which cannot be used for making scientific inferences.

Do respondents have the information needed to correctly answer the question?: Oftentimes, we assume that subjects have the necessary information to answer a question, when in reality, they do not. Even if a response is obtained, these responses tend to be inaccurate given the subjects’ lack of knowledge about the question being asked. For instance, we should not ask the CEO of a company about day-to-day operational details that they may not be aware of, or ask teachers about how much their students are learning, or ask high-schoolers, ‘Do you think the US Government acted appropriately in the Bay of Pigs crisis?’.

Question sequencing. In general, questions should flow logically from one to the next. To achieve the best response rates, questions should flow from the least sensitive to the most sensitive, from the factual and behavioural to the attitudinal, and from the more general to the more specific. Some general rules for question sequencing:

Start with easy non-threatening questions that can be easily recalled. Good options are demographics (age, gender, education level) for individual-level surveys and firmographics (employee count, annual revenues, industry) for firm-level surveys.

Never start with an open-ended question.

If following a historical sequence of events, follow a chronological order from earliest to latest.

Ask about one topic at a time. When switching topics, use a transition, such as, ‘The next section examines your opinions about…’

Use filter or contingency questions as needed, such as, ‘If you answered “yes” to question 5, please proceed to Section 2. If you answered “no”, please go to Section 3.’
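A filter or contingency question is simply a branching rule: the answer to one question determines which section the respondent sees next. The minimal sketch below implements the routing rule from the example above; the function and variable names are illustrative assumptions.

# Minimal sketch of a contingency (skip-logic) rule: the answer to question 5
# routes the respondent to Section 2 or Section 3, as in the example above.

def next_section(answer_to_q5: str) -> str:
    answer = answer_to_q5.strip().lower()
    if answer == "yes":
        return "Section 2"
    if answer == "no":
        return "Section 3"
    raise ValueError("Question 5 expects 'yes' or 'no'")

print(next_section("Yes"))  # -> Section 2
print(next_section("no"))   # -> Section 3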

Other golden rules. Do unto your respondents what you would have them do unto you. Be appreciative of respondents’ time, attention, and trust, and protect the confidentiality of their personal information. Always practice the following strategies in all survey research:

People’s time is valuable. Be respectful of their time. Keep your survey as short as possible and limit it to what is absolutely necessary. Respondents do not like spending more than 10-15 minutes on any survey, no matter how important it is. Longer surveys tend to dramatically lower response rates.

Always assure respondents about the confidentiality of their responses, and how you will use their data (e.g., for academic research) and how the results will be reported (usually, in the aggregate).

For organisational surveys, assure respondents that you will send them a copy of the final results, and make sure that you follow up with your promise.

Thank your respondents for their participation in your study.

Finally, always pretest your questionnaire, at least using a convenience sample, before administering it to respondents in a field setting. Such pretesting may uncover ambiguity, lack of clarity, or biases in question wording, which should be eliminated before administering to the intended sample.

Interview surveys

Interviews are a more personalised data collection method than questionnaires, and are conducted by trained interviewers using the same research protocol as questionnaire surveys (i.e., a standardised set of questions). However, unlike a questionnaire, the interview script may contain special instructions for the interviewer that are not seen by respondents, and may include space for the interviewer to record personal observations and comments. In addition, unlike postal surveys, the interviewer has the opportunity to clarify any issues raised by the respondent or ask probing or follow-up questions. However, interviews are time-consuming and resource-intensive. Interviewers need special interviewing skills as they are considered to be part of the measurement instrument, and must proactively strive not to artificially bias the observed responses.

The most typical form of interview is a personal or face-to-face interview, where the interviewer works directly with the respondent to ask questions and record their responses. Personal interviews may be conducted at the respondent’s home or office location. This approach may even be favoured by some respondents, while others may feel uncomfortable allowing a stranger into their homes. However, skilled interviewers can persuade respondents to co-operate, dramatically improving response rates.

A variation of the personal interview is a group interview, also called a focus group. In this technique, a small group of respondents (usually six to ten) is interviewed together in a common location. The interviewer is essentially a facilitator, whose job is to lead the discussion and ensure that every person has an opportunity to respond. Focus groups allow deeper examination of complex issues than other forms of survey research, because when people hear others talk, it often triggers responses or ideas that they had not thought of before. However, focus group discussion may be dominated by a strong personality, and some individuals may be reluctant to voice their opinions in front of their peers or superiors, especially when dealing with a sensitive issue such as employee underperformance or office politics. Because of their small sample size, focus groups are usually used for exploratory research rather than descriptive or explanatory research.

A third type of interview survey is a telephone interview. In this technique, interviewers contact potential respondents over the phone, typically based on a random selection of people from a telephone directory, to ask a standard set of survey questions. A more recent and technologically advanced approach is computer-assisted telephone interviewing (CATI), which is increasingly used by academic, government, and commercial survey researchers. Here the interviewer is a telephone operator who is guided through the interview process by a computer program displaying instructions and questions to be asked. The system also selects respondents randomly using a random digit dialling technique, and records responses using voice capture technology. Once respondents are on the phone, higher response rates can be obtained. This technique is not ideal for rural areas where telephone density is low, and also cannot be used for communicating non-audio information such as graphics or product demonstrations.
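Random digit dialling works by fixing known area-code and exchange prefixes and generating the remaining digits at random, so that unlisted numbers also have a chance of selection. The sketch below shows the idea; the prefixes are invented, and real CATI systems add further screening for non-working and business numbers.

import random

# Minimal sketch of random digit dialling: fix plausible area-code/exchange
# prefixes and randomise the remaining digits. The prefixes are invented
# for illustration.

def random_numbers(prefixes, n, seed=42):
    rng = random.Random(seed)
    numbers = []
    for _ in range(n):
        prefix = rng.choice(prefixes)        # e.g. area code plus exchange
        suffix = rng.randrange(0, 10_000)    # last four digits at random
        numbers.append(f"{prefix}-{suffix:04d}")
    return numbers

print(random_numbers(["07-4631", "07-4635"], n=3))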

Role of interviewer. The interviewer has a complex and multi-faceted role in the interview process, which includes the following tasks:

Prepare for the interview: Since the interviewer is in the forefront of the data collection effort, the quality of data collected depends heavily on how well the interviewer is trained to do the job. The interviewer must be trained in the interview process and the survey method, and also be familiar with the purpose of the study, how responses will be stored and used, and sources of interviewer bias. They should also rehearse and time the interview prior to the formal study.

Locate and enlist the co-operation of respondents: Particularly in personal, in-home surveys, the interviewer must locate specific addresses, and work around respondents’ schedules at sometimes undesirable times such as during weekends. They should also be like a salesperson, selling the idea of participating in the study.

Motivate respondents: Respondents often feed off the motivation of the interviewer. If the interviewer is disinterested or inattentive, respondents will not be motivated to provide useful or informative responses either. The interviewer must demonstrate enthusiasm about the study, communicate the importance of the research to respondents, and be attentive to respondents’ needs throughout the interview.

Clarify any confusion or concerns: Interviewers must be able to think on their feet and address unanticipated concerns or objections raised by respondents to the respondents’ satisfaction. Additionally, they should ask probing questions as necessary even if such questions are not in the script.

Observe quality of response: The interviewer is in the best position to judge the quality of information collected, and may supplement responses obtained using personal observations of gestures or body language as appropriate.

Conducting the interview. Before the interview, the interviewer should prepare a kit to carry to the interview session, consisting of a cover letter from the principal investigator or sponsor, adequate copies of the survey instrument, photo identification, and a telephone number for respondents to call to verify the interviewer’s authenticity. The interviewer should also try to call respondents ahead of time to set up an appointment if possible. To start the interview, they should speak in an assured, confident tone, such as, ‘I’d like to take a few minutes of your time to interview you for a very important study’, instead of, ‘May I come in to do an interview?’. They should introduce themselves, present personal credentials, explain the purpose of the study in one to two sentences, and assure respondents that their participation is voluntary and their comments confidential, all in less than a minute. No big words or jargon should be used, and no details should be provided unless specifically requested. If the interviewer wishes to record the interview, they should ask for respondents’ explicit permission before doing so. Even if the interview is recorded, the interviewer must take notes on key issues, probes, or verbatim phrases.

During the interview, the interviewer should follow the questionnaire script and ask questions exactly as written, and not change the words to make the question sound friendlier. They should also not change the order of questions or skip any question that may have been answered earlier. Any issues with the questions should be discussed during rehearsal prior to the actual interview sessions. The interviewer should not finish the respondent’s sentences. If the respondent gives a brief cursory answer, the interviewer should probe the respondent to elicit a more thoughtful, thorough response. Some useful probing techniques are:

The silent probe: Just pausing and waiting without moving on to the next question may suggest to respondents that the interviewer is waiting for a more detailed response.

Overt encouragement: An occasional ‘uh-huh’ or ‘okay’ may encourage the respondent to go into greater details. However, the interviewer must not express approval or disapproval of what the respondent says.

Ask for elaboration: Such as, ‘Can you elaborate on that?’ or ‘A minute ago, you were talking about an experience you had in high school. Can you tell me more about that?’.

Reflection: The interviewer can try the psychotherapist’s trick of repeating what the respondent said. For instance, ‘What I’m hearing is that you found that experience very traumatic’ and then pause and wait for the respondent to elaborate.

After the interview is completed, the interviewer should thank respondents for their time, tell them when to expect the results, and not leave hastily. Immediately after leaving, they should write down any notes or key observations that may help interpret the respondent’s comments better.

Biases in survey research

Despite all of its strengths and advantages, survey research is often tainted with systematic biases that may invalidate some of the inferences derived from such surveys. Five such biases are the non-response bias, sampling bias, social desirability bias, recall bias, and common method bias.

Non-response bias. Survey research is generally notorious for its low response rates. A response rate of 15-20 per cent is typical in a postal survey, even after two or three reminders. If the majority of the targeted respondents fail to respond to a survey, this may indicate a systematic reason for the low response rate, which may in turn raise questions about the validity of the study’s results. For instance, dissatisfied customers tend to be more vocal about their experience than satisfied customers, and are therefore more likely to respond to questionnaire surveys or interview requests than satisfied customers. Hence, any respondent sample is likely to have a higher proportion of dissatisfied customers than the underlying population from which it is drawn. In this instance, not only will the results lack generalisability, but the observed outcomes may also be an artefact of the biased sample. Several strategies may be employed to improve response rates:

Advance notification: Sending a short letter to the targeted respondents soliciting their participation in an upcoming survey can prepare them in advance and improve their propensity to respond. The letter should state the purpose and importance of the study, mode of data collection (e.g., via a phone call, a survey form in the mail, etc.), and appreciation for their co-operation. A variation of this technique may be to ask the respondent to return a prepaid postcard indicating whether or not they are willing to participate in the study.

Relevance of content: People are more likely to respond to surveys examining issues of relevance or importance to them.

Respondent-friendly questionnaire: Shorter survey questionnaires tend to elicit higher response rates than longer questionnaires. Furthermore, questions that are clear, non-offensive, and easy to respond to tend to attract higher response rates.

Endorsement: For organisational surveys, it helps to gain endorsement from a senior executive attesting to the importance of the study to the organisation. Such endorsement can be in the form of a cover letter or a letter of introduction, which can improve the researcher’s credibility in the eyes of the respondents.

Follow-up requests: Multiple follow-up requests may coax some non-respondents to respond, even if their responses are late.

Interviewer training: Response rates for interviews can be improved with skilled interviewers trained in how to request interviews, use computerised dialling techniques to identify potential respondents, and schedule call-backs for respondents who could not be reached.

Incentives: Incentives in the form of cash or gift cards, giveaways such as pens or stress balls, entry into a lottery, draw or contest, discount coupons, promise of contribution to charity, and so forth may increase response rates.

Non-monetary incentives: Businesses, in particular, are more prone to respond to non-monetary incentives than financial incentives. An example of such a non-monetary incentive is a benchmarking report comparing the business’s individual response against the aggregate of all responses to a survey.

Confidentiality and privacy: Finally, assurances that respondents’ private data or responses will not fall into the hands of any third party may help improve response rates.

Sampling bias. Telephone surveys conducted by calling a random sample of publicly available telephone numbers will systematically exclude people with unlisted telephone numbers, mobile phone numbers, and people who are unable to answer the phone when the survey is being conducted—for instance, if they are at work—and will include a disproportionate number of respondents who have landline telephone services with listed phone numbers and people who are home during the day, such as the unemployed, the disabled, and the elderly. Likewise, online surveys tend to include a disproportionate number of students and younger people who are constantly on the Internet, and systematically exclude people with limited or no access to computers or the Internet, such as the poor and the elderly. Similarly, questionnaire surveys tend to exclude children and the illiterate, who are unable to read, understand, or meaningfully respond to the questionnaire. A different kind of sampling bias relates to sampling the wrong population, such as asking teachers (or parents) about their students’ (or children’s) academic learning, or asking CEOs about operational details in their company. Such biases make the respondent sample unrepresentative of the intended population and hurt generalisability claims about inferences drawn from the biased sample.

Social desirability bias. Many respondents tend to avoid negative opinions or embarrassing comments about themselves, their employers, family, or friends. With negative questions such as, ‘Do you think that your project team is dysfunctional?’, ‘Is there a lot of office politics in your workplace?’, or ‘Have you ever illegally downloaded music files from the Internet?’, the researcher may not get truthful responses. This tendency among respondents to ‘spin the truth’ in order to portray themselves in a socially desirable manner is called the ‘social desirability bias’, which hurts the validity of responses obtained from survey research. There is practically no way of overcoming social desirability bias in a questionnaire survey, but in an interview setting, an astute interviewer may be able to spot inconsistent answers and ask probing questions or use personal observations to supplement respondents’ comments.

Recall bias. Responses to survey questions often depend on subjects’ motivation, memory, and ability to respond. Particularly when dealing with events that happened in the distant past, respondents may not adequately remember their own motivations or behaviours, or their memory of such events may have evolved with time and no longer be accurately retrievable. For instance, if respondents are asked to describe their use of computer technology one year ago, or even memorable childhood events like birthdays, their responses may not be accurate due to difficulties with recall. One possible way of overcoming recall bias is to anchor the respondent’s memory in specific events as they happened, rather than asking them to recall perceptions and motivations from memory.

Common method bias. Common method bias refers to the amount of spurious covariance shared between independent and dependent variables that are measured at the same point in time, such as in a cross-sectional survey, using the same instrument, such as a questionnaire. In such cases, the phenomenon under investigation may not be adequately separated from measurement artefacts. Standard statistical tests are available to test for common method bias, such as Harman’s single-factor test (Podsakoff, MacKenzie, Lee & Podsakoff, 2003), [3] Lindell and Whitney’s (2001) [4] marker variable technique, and so forth. This bias can potentially be avoided if the independent and dependent variables are measured at different points in time using a longitudinal survey design, or if these variables are measured using different methods, such as computerised recording of the dependent variable versus questionnaire-based self-rating of the independent variables.
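As a rough illustration, Harman’s single-factor test loads all items measuring the independent and dependent variables into a single unrotated factor analysis and checks whether one factor accounts for the majority of the variance. The sketch below approximates this with principal components on simulated data; it is an informal check, not a full implementation of the remedies discussed by Podsakoff et al. (2003).

import numpy as np
from sklearn.decomposition import PCA

# Rough sketch of Harman's single-factor test: analyse all survey items
# together (approximated here with PCA) and check whether a single factor
# explains the majority of the variance. Data are simulated for illustration.

rng = np.random.default_rng(0)
n_respondents, n_items = 200, 8
items = rng.normal(size=(n_respondents, n_items))   # stand-in for survey items

pca = PCA(n_components=n_items)
pca.fit(items)
first_factor_share = pca.explained_variance_ratio_[0]

print(f"Variance explained by first factor: {first_factor_share:.1%}")
if first_factor_share > 0.5:
    print("A single factor dominates - common method bias may be a concern.")
else:
    print("No single factor accounts for most of the variance.")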

  • Dillman, D. (1978). Mail and telephone surveys: The total design method. New York: Wiley.
  • Rasinski, K. (1989). The effect of question wording on public support for government spending. Public Opinion Quarterly, 53(3), 388–394.
  • Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903. http://dx.doi.org/10.1037/0021-9010.88.5.879
  • Lindell, M. K., & Whitney, D. J. (2001). Accounting for common method variance in cross-sectional research designs. Journal of Applied Psychology, 86(1), 114–121.

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


12.1 What is survey research, and when should you use it?

Learning objectives.

Learners will be able to…

  • Distinguish between survey as a research design and questionnaires used to measure concepts
  • Identify the strengths and weaknesses of surveys
  • Evaluate whether survey design fits with their research question

Pre-awareness check (Knowledge)

Have you ever been selected as a participant to complete a survey? How were you contacted? Would you incorporate the researchers’ methods into your research design?

Researchers quickly learn that there is more to constructing a good survey than meets the eye. Survey design takes a great deal of thoughtful planning and often many rounds of revision, but it is worth the effort. As we’ll learn in this section, there are many benefits to choosing survey research as your data collection method. We’ll discuss what a survey is, its potential benefits and drawbacks, and what research projects are the best fit for survey design.

Is survey research right for your project?


Questionnaires are completed by individual people, so the unit of observation is almost always individuals, rather than groups or organizations. Generally speaking, individuals provide the most informed data about their own lives and experiences, so surveys often also use individuals as the unit of analysis. Surveys are also helpful in analyzing dyads, families, groups, organizations, and communities, but regardless of the unit of analysis, the unit of observation for surveys is usually individuals.

In some cases, getting the most-informed person to complete the questionnaire may not be feasible. As we discussed in Chapter 2 and Chapter 6, ethical duties to protect clients and vulnerable community members are important. The ethical supervision needed via the IRB to complete projects that pose significant risks to participants takes time and effort. Sometimes researchers rely on key informants and gatekeepers like clinicians, teachers, and administrators who are less likely to be harmed by the survey. Key informants are people who are especially knowledgeable about the topic. If your study is about nursing, you would probably consider nurses as your key informants. These considerations are more thoroughly addressed in Chapter 10. Sometimes, participants complete surveys on behalf of people in your target population who are infeasible to survey for some reason. Some examples include a head of household completing a survey about family finances or an administrator completing a survey about staff morale on behalf of their employees. In this case, the survey respondent is a proxy, providing their best informed guess about the responses other people might have chosen if they were able to complete the survey independently. You are relying on an individual unit of observation (one person filling out a self-report questionnaire) and a group or organization unit of analysis (the family or organization the researcher wants to make conclusions about). Proxies are commonly used when the target population is not capable of providing consent or appropriate answers, as with young children.

Proxies rely on their best judgment of another person’s experiences, and while that is valuable information, it may introduce bias and error into the research process. If you are planning to conduct a survey of people with second-hand knowledge of your topic, consider reworking your research question to be about something they have more direct knowledge of and can answer easily.

Remember, every project has limitations. Social work researchers look for the most favorable choices in design and methodology, as there are no perfect projects. A common missed opportunity arises when researchers want to understand client outcomes (unit of analysis) by surveying practitioners (unit of observation). If a practitioner has a caseload of 30 clients, it’s not really possible to answer a question like “How much progress have your clients made?” on a survey. Would they just average all 30 clients together? Instead, design a survey that asks practitioners about their education, professional experience, and other things they know about first-hand. By making your unit of analysis and unit of observation the same, you can ensure the people completing your survey are able to provide informed answers.

Researchers may introduce measurement error if the person completing the questionnaire does not have adequate knowledge or has a biased opinion about the phenomenon of interest.

In summary, survey design tends to be used in quantitative research and best fits with research projects that have the following attributes:

  • Researchers plan to collect their own raw data, rather than secondary analysis of existing data.
  • Researchers have access to the most knowledgeable people (that they can feasibly and ethically sample) to complete the questionnaire.
  • Individuals are the unit of observation, and in many cases, the unit of analysis.
  • Researchers will try to observe things objectively and try not to influence participants to respond differently.
  • The research question asks about indirect observables—things participants can self-report on a questionnaire.
  • There are valid, reliable, and commonly used scales (or other self-report measures) for the variables in the research question.


Strengths of survey methods

Researchers employing survey research as a research design enjoy a number of benefits. First, surveys are an excellent way to gather lots of information from many people at a relatively low cost. Related to this cost-effectiveness is a survey’s potential for generalizability. Because surveys allow researchers to collect data from very large samples for a relatively low cost, survey methods lend themselves to probability sampling techniques, which we discussed in Chapter 10. When used with probability sampling approaches, survey research is the best method to use when one hopes to gain a representative picture of the attitudes and characteristics of a large group.

Survey research is particularly adept at investigating indirect observables or constructs. Indirect observables (e.g., income, place of birth, or smoking behavior) are things we have to ask someone to self-report because we cannot observe them directly. Constructs such as people’s preferences (e.g., political orientation), traits (e.g., self-esteem), attitudes (e.g., toward immigrants), or beliefs (e.g., about a new law) are also often best collected through multi-item instruments such as scales. Unlike qualitative studies in which these beliefs and attitudes would be detailed in unstructured conversations, survey design seeks to systematize answers so researchers can make apples-to-apples comparisons across participants. Questionnaires used in survey design are flexible because you can ask about anything, and the variety of questions allows you to expand social science knowledge beyond what is naturally observable.

Survey research also tends to use reliable instruments within its method of inquiry: many scales used in survey questionnaires are standardized instruments. Other methods, such as qualitative interviewing, which we’ll learn about in Chapter 18, do not offer the same consistency that a quantitative survey offers. This is not to say that all surveys are always reliable. A poorly phrased question can cause respondents to interpret its meaning differently, which can reduce that question’s reliability. Assuming well-constructed questions and survey design, one strength of this methodology is its potential to produce reliable results.
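The reliability of a multi-item scale is commonly summarised with Cronbach’s alpha, which compares the sum of the item variances to the variance of the total score; values above roughly .70 are conventionally treated as acceptable. A minimal computation is sketched below on made-up responses.

import numpy as np

# Minimal sketch: Cronbach's alpha for a k-item scale,
#   alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
# The response matrix below (rows = respondents, columns = items) is made up.

def cronbach_alpha(responses):
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]
    item_variances = responses.var(axis=0, ddof=1)
    total_variance = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

scale_items = [
    [4, 4, 5, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 5, 4, 4],
]
print(round(cronbach_alpha(scale_items), 2))  # -> 0.93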

The versatility of survey research is also an asset. Surveys are used by all kinds of people in all kinds of professions. They can measure anything that people can self-report. Surveys are also appropriate for exploratory, descriptive, and explanatory research questions (though exploratory projects may benefit more from qualitative methods). Moreover, they can be delivered in a number of flexible ways, including via email, mail, text, and phone. We will describe the many ways to implement a survey later on in this chapter.

In sum, the following are benefits of survey research:

  • Cost-effectiveness
  • Generalizability
  • Reliability
  • Versatility


Weaknesses of survey methods

As with all methods of data collection, survey research also comes with a few drawbacks. First, while one might argue that surveys are flexible in the sense that we can ask any kind of question about any topic we want, once the survey is given to the first participant, there is nothing you can do to change the survey without biasing your results. Because surveys want to minimize the amount of influence that a researcher has on the participants, everyone gets the same questionnaire. Let’s say you mail a questionnaire out to 1,000 people and then discover, as responses start coming in, that your phrasing on a particular question seems to be confusing a number of respondents. At this stage, it’s too late for a do-over or to change the question for the respondents who haven’t yet returned their questionnaires. When conducting qualitative interviews or focus groups, on the other hand, a researcher can provide respondents further explanation if they’re confused by a question and can tweak their questions as they learn more about how respondents seem to understand them. Survey researchers often ask colleagues, students, and others to pilot test their questionnaire and catch any errors prior to sending it to participants; however, once researchers distribute the survey to participants, there is little they can do to change anything.

Depth can also be a problem with surveys. Survey questions are standardized; thus, it can be difficult to ask anything other than very general questions that a broad range of people will understand. Because of this, survey results may not provide as detailed an understanding as results obtained using methods of data collection that allow a researcher to more comprehensively examine whatever topic is being studied. Let’s say, for example, that you want to learn something about voters’ willingness to elect an African American president. General Social Survey respondents were asked, “If your party nominated an African American for president, would you vote for him if he were qualified for the job?” (Smith, 2009). [2] Respondents were then asked to respond either yes or no to the question. But what if someone’s opinion was more complex than could be answered with a simple yes or no? What if, for example, a person was willing to vote for an African American man, but only if that candidate was a conservative, moderate, anti-abortion, antiwar, and so on? Then we would miss out on that additional detail when the participant responded “yes” to our question. Of course, you could add a question to your survey about moderate vs. radical candidates, but could you do that for all of the relevant attributes of candidates for all people? Moreover, how do you know that moderate or antiwar means the same thing to everyone who participates in your survey? Without having a conversation with someone and asking them follow-up questions, survey research can lack enough detail to understand how people truly think.

In sum, potential drawbacks to survey research include the following:

  • Inflexibility
  • Lack of depth
  • Problems specific to cross-sectional surveys, which we will address in the next section.

Secondary analysis of survey data

This chapter is designed to help you conduct your own survey, but that is not the only option for social work researchers. Look back to Chapter 2 and recall our discussion of secondary data analysis. As we talked about previously, using data collected by another researcher can have a number of benefits. Well-funded researchers have the resources to recruit a large representative sample and ensure their measures are valid and reliable prior to sending them to participants. Before you get too far into designing your own data collection, make sure there are no existing data sets out there that you can use to answer your question. We refer you to Chapter 2 for a full discussion of the strengths and challenges of using secondary analysis of survey data.

Key Takeaways

  • Strengths of survey research include its cost-effectiveness, generalizability, reliability, and versatility.
  • Weaknesses of survey research include inflexibility and lack of potential depth. There are also weaknesses specific to cross-sectional surveys, the most common type of survey.

TRACK 1 (IF YOU ARE CREATING A RESEARCH PROPOSAL FOR THIS CLASS):

If you are using quantitative methods in a student project, it is very likely that you are going to use survey design to collect your data.

  • Check to make sure that your research question and study fit best with survey design using the criteria in this section
  • Remind yourself of any limitations to generalizability based on your sampling frame.
  • Refresh your memory on the operational definitions you will use for your dependent and independent variables.

TRACK 2 (IF YOU AREN’T CREATING A RESEARCH PROPOSAL FOR THIS CLASS):

You are interested in understanding more about the needs of unhoused individuals in rural communities, including how these needs vary based on demographic characteristics and personal identities.

  • Develop a working research question for this topic.
  • Using the criteria for survey design described in this section, do you think a survey would be appropriate to answer your research question? Why or why not?
  • What are the potential limitations to generalizability if you select survey design to answer this research question?
  • Unless researchers change the order of questions as part of their methodology to ensure accurate responses. ↵
  • Smith, T. W. (2009). Trends in willingness to vote for a Black and woman for president, 1972–2008. GSS Social Change Report No. 55. Chicago, IL: National Opinion Research Center.


Doctoral Research Methods in Social Work Copyright © by Mavs Open Press. All Rights Reserved.


Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes. Revised on 20 March 2023.

A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Introduction
  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions

Step 1: Consider your aims and approach

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative research designs tend to be more flexible and inductive, allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive, with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics.

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval?

At each stage of the research design process, make sure that your choices are practically feasible.


Step 2: Choose a type of research design

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

Common types of qualitative design often take similar approaches to data collection, but focus on different aspects when analysing the data.

Step 3: Identify your population and sampling method

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling. The sampling method you use affects how confidently you can generalise your results to the population as a whole.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
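The contrast can be made concrete in a couple of lines: in probability sampling, every member of a defined sampling frame has a known, non-zero chance of selection. The sketch below draws a simple random sample from a hypothetical population list; the frame and sample size are illustrative.

import random

# Minimal sketch of probability sampling: a simple random sample, where every
# member of the (hypothetical) population list has an equal chance of selection.
population = [f"person_{i}" for i in range(1, 501)]   # sampling frame of 500 people

random.seed(7)                            # fixed seed so the example is reproducible
sample = random.sample(population, k=50)  # simple random sample of 50
print(len(sample), sample[:3])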

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations, which events or actions will you count?

If you’re using surveys, which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity have already been established.
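For example, a concept such as satisfaction might be operationalised as the average of several Likert-scale items. The sketch below is only an illustration; the item names and the 1–5 response range are assumptions, not part of any established instrument:

```python
# Hypothetical operationalisation of "course satisfaction" as the mean of
# three 5-point Likert items (1 = strongly disagree, 5 = strongly agree).
likert_items = ["content_was_useful", "teaching_was_clear", "would_recommend"]

def satisfaction_score(responses: dict) -> float:
    """Turn raw item responses into a single measurable indicator."""
    for item in likert_items:
        if not 1 <= responses[item] <= 5:
            raise ValueError(f"{item} must be on the 1-5 scale")
    return sum(responses[item] for item in likert_items) / len(likert_items)

print(satisfaction_score({"content_was_useful": 4,
                          "teaching_was_clear": 5,
                          "would_recommend": 4}))  # 4.33...
```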

Reliability and validity

Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.
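One common check of internal consistency (one facet of reliability) during a pilot study is Cronbach’s alpha. Below is a minimal sketch using made-up pilot responses; the threshold mentioned in the comment is a common rule of thumb rather than a fixed standard:

```python
from statistics import pvariance

def cronbach_alpha(item_scores: list) -> float:
    """Cronbach's alpha for a list of items, each a list of respondent scores."""
    k = len(item_scores)                                    # number of items
    item_variances = sum(pvariance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # each respondent's total
    return (k / (k - 1)) * (1 - item_variances / pvariance(totals))

# Hypothetical pilot responses: 3 items answered by 5 respondents.
pilot = [
    [4, 5, 3, 4, 2],   # item 1
    [4, 4, 3, 5, 2],   # item 2
    [5, 5, 2, 4, 3],   # item 3
]
print(round(cronbach_alpha(pilot), 2))  # values around 0.7 or higher are often taken as acceptable
```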

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.
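As a small illustration of safeguarding sensitive data, direct identifiers can be replaced with pseudonymous codes before analysis. The field names and salt below are hypothetical:

```python
import hashlib

SALT = "project-specific-secret"  # store separately from the data files

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g., an email address) with a stable code."""
    return hashlib.sha256((SALT + identifier).encode("utf-8")).hexdigest()[:12]

raw_record = {"email": "respondent@example.org", "q1": 4, "q2": "agree"}
safe_record = {"respondent_id": pseudonymise(raw_record.pop("email")), **raw_record}
print(safe_record)
```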

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis. With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics, you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
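Here is a minimal sketch of these three kinds of summary for a hypothetical set of test scores, using only Python’s standard library:

```python
from collections import Counter
from statistics import mean, stdev

scores = [3, 4, 4, 5, 2, 4, 3, 5, 4, 3]   # hypothetical test scores

print(Counter(scores))   # distribution: frequency of each score
print(mean(scores))      # central tendency: the average score
print(stdev(scores))     # variability: sample standard deviation
```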

Using inferential statistics, you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
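The sketch below illustrates both kinds of test on hypothetical data, assuming the SciPy library is available; it is an illustration of the distinction rather than a recommendation of these particular tests:

```python
from scipy import stats

# Hypothetical data: income (in thousands) and a 0-100 health score.
income = [22, 35, 41, 48, 55, 60, 72, 80]
health = [55, 61, 58, 70, 72, 69, 78, 84]

r, p_corr = stats.pearsonr(income, health)       # association between two variables
print(f"correlation r={r:.2f}, p={p_corr:.3f}")

# Hypothetical outcome scores for two groups (e.g., treatment vs. control).
group_a = [70, 74, 69, 72, 75, 71]
group_b = [65, 66, 70, 64, 68, 67]
t, p_ttest = stats.ttest_ind(group_a, group_b)   # difference between group means
print(f"t={t:.2f}, p={p_ttest:.3f}")
```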

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis.

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data, it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question.

  • If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts, and meanings, use qualitative methods.
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables, use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.


Quantitative Methods for the Social Sciences, pp. 23–35

A Short Introduction to Survey Research

  • Daniel Stockemer
  • First Online: 20 November 2018


This chapter offers a brief introduction to survey research. In the first part of the chapter, students learn about the importance of survey research in the social and behavioral sciences, substantive research areas where survey research is frequently used, and important cross-national surveys such as the World Values Survey and the European Social Survey. In the second part, I introduce different types of surveys.




Further Reading

Why do we need survey research?

Converse, J. M. (2017). Survey research in the United States: Roots and emergence 1890–1960. New York: Routledge. This book takes more of a historical angle, tackling the history of survey research in the United States.

Davidov, E., Schmidt, P., & Schwartz, S. H. (2008). Bringing values back in: The adequacy of the European Social Survey to measure values in 20 countries. Public Opinion Quarterly, 72(3), 420–445. This rather short article highlights the importance of conducting a large pan-European survey to measure Europeans’ social and political beliefs.

Schmitt, H., Hobolt, S. B., Popa, S. A., & Teperoglou, E. (2015). European parliament election study 2014, voter study. GESIS Data Archive, Cologne. ZA5160 Data file Version 2(0). The European Voter Study is another important election study that researchers and students can access freely. It provides a comprehensive battery of variables about voting, political preferences, vote choice, demographics, and political and social opinions of the electorate.

Applied Survey Research

Almond, G. A., & Verba, S. (1963). The civic culture: Political attitudes and democracy in five nations. Princeton: Princeton University Press. Almond and Verba’s masterpiece is a seminal work in survey research measuring citizens’ political and civic attitudes in key Western democracies. The book is also one of the first to systematically use survey research to measure political traits.

Inglehart, R., & Welzel, C. (2005). Modernization, cultural change, and democracy: The human development sequence. Cambridge: Cambridge University Press. This is an influential book, which uses data from the World Values Survey to explain modernization as a process that shifts individuals’ values away from traditional and patriarchal values and toward post-materialist values, including environmental protection, minority rights, and gender equality.



The Importance of Survey Research Standards

Jack E. Fincham

a School of Pharmacy, The University of Missouri Kansas City, Kansas City, MO

b Henry W. Bloch School of Management, The University of Missouri Kansas City, Kansas City, MO

Every discipline within fields of research has instituted guidelines and templates for research endeavors and subsequent publications of findings, with the ultimate result being an increase in quality and acceptance by researchers within and across disciplines. These significant efforts are by nature ongoing, as well they should be. These enhancements and guideline developments have been instituted in basic science disciplines, clinical pharmacy, and pharmacy administration, relevant and related to subsequent scholarly publication of research findings. Specific research endeavors have included bench research, clinical trials and randomized clinical trials, meta-analyses, outcomes research, and large-scale database analyses. A similar need for quality and standardization also exists for survey research and scholarship. The purpose of this paper is to clarify why this is important and crucial for the Journal and our academy.

INTRODUCTION

In the Research Standards section of Instructions to Authors ( http://archive.ajpe.org/instructions.asp ), the Journal provides guidelines for authors to consider when preparing a manuscript for submission to the Journal . These standards are important for a number of reasons, and may be seen as unique and groundbreaking with regard to other academic health professions journals. This paper is intended to add clarity to this sometimes controversial set of Journal guidelines.

Whether referring to sampling texts such as Cochran’s Sampling Techniques, 3rd edition, 1 or Kish’s Survey Sampling, 2 or using guidelines or tables generated based on these classics as found in Krejcie and Morgan, 3 Salant and Dillman, 4 Bartlett and colleagues, 5 and Dillman, 6 the researcher will find that small populations require a high number of data elements (ie, high response rates) to confidently generalize results because of the potential for sampling error. The recommended minimum sample size for a study depends upon the desired confidence level (typically 95%) and how varied the population is with respect to the variable(s) of interest.

Using the conservative approach of a 50/50 split (in other words, an equal chance of one response versus another) on a dichotomous variable of interest at the conventional 95% confidence level for a population of 100, we would need a sample of 80 to ensure a sampling error of no more than +/- 5% at the 95% confidence level. For a population of 100, if a response rate of 50% was achieved for an item with a simple yes/no answer (eg, “Do you have a full-time biostatistician employed by the college?”) and responses were evenly split (50% yes and 50% no), it would not be prudent to extrapolate those findings to the overarching population (100) because the range of possible true percentages would be 25%-75% (that is, all, some, or none of the 50 nonrespondents could have a biostatistician at their college.) 7 (p55)

For a variable with a smaller standard deviation in response to a survey item, say an 80/20 split (eg, 80% agree, 20% disagree), a sample size of only 71 (rather than 80) would be required to maintain the same precision as in the previous example, ie, a 95% confidence level. However, according to Salant and Dillman, 4 (p55) “unless we know the split ahead of time, it is best to be conservative and use 50/50.” Continuous data sets may not require as many data points, however, “if a categorical variable will play a primary role in data analyses…the categorical sample size formulas should be used.” 5(p46) To estimate the sample size required for a continuous variable would necessitate a measure of variability in the population, which may not be easily discerned, thus “the sample size for the proportion is frequently preferred.” 8(p4) As well, “the effect of nonresponse on one variable can be very different than for others in the same survey.” 7 (p54)

Others have simply called for a census in small populations, again necessitating high response rates. 8,9 These considerations supported the rationale for the expectations set forth in the Viewpoint by Fincham. 10

There are 129 doctor of pharmacy degree programs in academic pharmacy in 1 of 3 classifications of accreditation: 109 full accreditation, 15 candidates, and 5 precandidates. 11 The recommended sample size for N=129 at +/- 5% sampling error and 95% confidence level is 97, or a 75% response rate for a 50/50 split. Modeling on a variable with an 80/20 split (ie, less variability in the population) would result in a recommended sample size of 85 or a 66% response rate. Because of the increase in the number of colleges and schools of pharmacy in the United States, the Journal will now accept a 70% response rate threshold for those survey projects collecting data on multiple variable types with the intent of generalizing results to the entire population.
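These recommended sample sizes follow from the standard formula for estimating a proportion, adjusted with a finite-population correction. The short Python sketch below is a plain rendering of that textbook formula (it is not code from any of the cited sources); like the published tables, it rounds to the nearest whole number, and it reproduces the figures quoted above.

```python
def required_sample_size(population: int, p: float = 0.5,
                         margin: float = 0.05, z: float = 1.96) -> int:
    """Sample size to estimate a proportion p within +/- margin at ~95% confidence,
    adjusted for a finite population of the given size."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)        # finite-population correction
    return round(n)                             # published tables round to the nearest whole number

print(required_sample_size(100))           # 80 -> population of 100, 50/50 split
print(required_sample_size(100, p=0.8))    # 71 -> population of 100, 80/20 split
print(required_sample_size(129))           # 97 -> roughly 75% of the 129 programs
print(required_sample_size(129, p=0.8))    # 85 -> roughly a 66% response rate
```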

The paper by Draugalis and Plaza 12 provides several examples of the importance of striving for a census and how much confidence readers would have in a published study with a data set with less than optimal response rates, including the annual AACP Faculty Salary Survey. As an example of the potential effects of nonresponse on specific variables in a study, consider the following from a published study on career planning and preparation strategies of pharmacy deans. 13 The subjects were 53 “new” deans with less than 5 years’ experience and 40 “experienced” deans previously in the database with greater than 5 years’ experience, for a cohort of 93 sitting permanent deans (ie, acting and interim deans were excluded) in 2009. Descriptive findings were presented for the total cohort as well as for separate groups on a number of variables when contrasts were desired. “Newly named deans spent an average of 17.1 +/- 8.7 years in the professoriate prior to assuming their first deanship, compared with established deans who had spent an average of 19.0 +/- 5.1 years ( p = 0.006).” If just 3 of the new dean respondents with no or few years in the professoriate had not participated in the study, the mean would have increased to 18.1, the comparison would not have been significant, and an important finding would have been missed. In the career path ladder variable, 9 of the 53 new deans fell in the nontraditional category. If any number of these subjects had actually been nonrespondents, and the closer to actually all 9 of them not participating, this would have skewed descriptive findings and obscured longitudinal comparisons.

High response rates to a research survey do not ensure the validity of the findings as there are other potential sources of error to consider. While attaining a high response rate is a necessary first step, it is not sufficient in and of itself. The specific research question determines the acceptable research methods. For example, in some inquiries, a survey of all colleges and schools of pharmacy may not be necessary or desirable. Depending on the research question, interviews or focus groups may be useful, but the results cannot be generalized to all institutions. Some projects may be intended to gather information only from certain types of institutions, such as private entities, or programs affiliated with a health sciences center. A demonstration project with descriptive findings may be useful to others and in a sense, the argument would be for a methodological development, with the method being generalizable and useful to others, but not the specific institutional findings pertinent to their research. Also, the accepted tools of modeling and decision analytic methods may be appropriate alternatives.

IMPORTANCE OF RESEARCH GUIDELINES AND STANDARDS

In several other research arenas, standards for research methods have been proposed, implemented, and well accepted. Other journals have set standards for research and publications appearing in such. In the 1990s, an international collaboration set in motion a process whereby research standards were developed to enhance the quality and validity of results from clinical trials. A thorough scrutiny of refereed journals accessed through MEDLINE, Embase, Cochrane Central, and associated reference lists was accomplished, and then experts determined the CONSORT checklist, which was subsequently proven to improve the methodology, quality, and external validity aspects of reports of randomized clinical trials. 14,15

Similarly a checklist has been published for qualitative research in hopes of promoting explicit, comprehensive reporting of such research. 16 A Canadian group has proposed developing a survey reporting guideline for health research beginning in 2013 (David Moher, Director, Evidence-based Practice Centre, University of Ottawa, Canada, personal communication, May 17, 2012).

The EQUATOR network (the resource center for good reporting of health research studies) also has been developed to address and make recommendations dealing with the “growing evidence demonstrating widespread deficiencies in the reporting of health research studies.” 17 The EQUATOR Web site provides a list of collected tools and guidelines available for assessing health research issues ( www.equator-network.org ).

Poor reporting guidelines lead to deficient reporting of outcomes in written summaries of research. Bennett and colleagues have summarized this problem as follows: “There is limited guidance and no consensus regarding the optimal reporting of survey research. As in other areas of research poor reporting compromises both transparency and reliability, which are fundamental tenets of research.” 18 (p.8)

In addressing their concerns over established response rates, Mészáros and colleagues 19 point to the Journal of Dental Education and Academic Medicine as similar publications to the Journal that do not specify response rate criteria. Actually, the issue of response rates has been addressed repeatedly and specifically in these journals. As early as 1983, Creswell and Kuster 20 writing in the Journal of Dental Education noted that at that juncture, 40% of papers published over the previous 5 years were survey studies. Thirty years ago, they called for increased diligence in assessing appropriate sample sizes, adequate attention paid to survey response rates, and greater effort in improving the quality of survey-related research in the Journal of Dental Education.

In 2009, in an excellent analysis of survey research issues in the Journal of Dental Education, Chambers and Licari suggested: “Evidence that is not grounded in theory is just data. There is a natural pull on the authors of surveys to interpret their findings as supporting policies or positions they favor.” 21(p288) The authors also speak to the importance of adequate response rates: “…that the precision of any claim based on a survey is strongly affected by sample size.” 21(p294) The authors point to sample saturation as a technique to reduce the impact of bias in surveys. This technique directly addresses the response rate issue: the larger the sample size and the higher the response rate, the more accuracy can be attributed to the study results. A built-in assumption is that even unknown missing data adversely affect the conclusions of the analyses. As sample size and response rate increase, however, it becomes less and less likely that results from the nonrespondents would have been contrary to, and materially different from, the results obtained from the analyses of the data in hand.

Response rates matter a great deal, and this point has been made in the Journal of Dental Education over a 30-year period. The issue is not that the Journal of Dental Education has chosen not to develop standards for survey research papers, but rather that the American Journal of Pharmaceutical Education has taken a leadership role in this regard.

Although it is true that Academic Medicine does not explicitly list an acceptable response rate, the October 2011 issue provided summary guidance for survey research published in their journal. 22 In this excellent summary of good research practices relative to survey design and reporting, 5 references are listed. 23-27 These seminal references provide explicit information regarding sampling, research design, response rates and associated problems with biases, and acceptability indices in other components of survey research. In one of these “gold standard” references, Krosnick notes that: “It is important to recognize the inherent limitations of nonprobability sampling methods and to draw conclusions about populations or differences between populations tentatively when nonprobability sampling methods are used.” 25(p541) This point becomes even more significant when low response rates are achieved in nonprobability samples.

GUIDELINES AND STANDARDS AS A QUALITY CONTROL MECHANISM

Setting standards and suggesting guidelines are in no way a move on the part of the Journal editors to stifle research or unfairly limit the reporting of research findings; nor are they intended in any manner to arbitrarily curtail creativity. Many fine survey research papers are published in the Journal and contribute to the academy. There are simply no published studies that have pointed out the negative impact of such standard-setting processes on the research endeavors related to clinical, health services research, or sociological research.


Chapter 9: Survey Research

Overview of Survey Research

Learning Objectives

  • Define what survey research is, including its two important characteristics.
  • Describe several different ways that survey research can be used and give some examples.

What Is Survey Research?

Survey research  is a quantitative and qualitative method with two important characteristics. First, the variables of interest are measured using self-reports. In essence, survey researchers ask their participants (who are often called respondents  in survey research) to report directly on their own thoughts, feelings, and behaviours. Second, considerable attention is paid to the issue of sampling. In particular, survey researchers have a strong preference for large random samples because they provide the most accurate estimates of what is true in the population. In fact, survey research may be the only approach in psychology in which random sampling is routinely used. Beyond these two characteristics, almost anything goes in survey research. Surveys can be long or short. They can be conducted in person, by telephone, through the mail, or over the Internet. They can be about voting intentions, consumer preferences, social attitudes, health, or anything else that it is possible to ask people about and receive meaningful answers.  Although survey data are often analyzed using statistics, there are many questions that lend themselves to more qualitative analysis.

Most survey research is nonexperimental. It is used to describe single variables (e.g., the percentage of voters who prefer one presidential candidate or another, the prevalence of schizophrenia in the general population) and also to assess statistical relationships between variables (e.g., the relationship between income and health). But surveys can also be experimental. The study by Lerner and her colleagues is a good example. Their use of self-report measures and a large national sample identifies their work as survey research. But their manipulation of an independent variable (anger vs. fear) to assess its effect on a dependent variable (risk judgments) also identifies their work as experimental.

History and Uses of Survey Research

Survey research may have its roots in English and American “social surveys” conducted around the turn of the 20th century by researchers and reformers who wanted to document the extent of social problems such as poverty (Converse, 1987) [1] . By the 1930s, the US government was conducting surveys to document economic and social conditions in the country. The need to draw conclusions about the entire population helped spur advances in sampling procedures. At about the same time, several researchers who had already made a name for themselves in market research, studying consumer preferences for American businesses, turned their attention to election polling. A watershed event was the presidential election of 1936 between Alf Landon and Franklin Roosevelt. A magazine called  Literary Digest  conducted a survey by sending ballots (which were also subscription requests) to millions of Americans. Based on this “straw poll,” the editors predicted that Landon would win in a landslide. At the same time, the new pollsters were using scientific methods with much smaller samples to predict just the opposite—that Roosevelt would win in a landslide. In fact, one of them, George Gallup, publicly criticized the methods of Literary Digest  before the election and all but guaranteed that his prediction would be correct. And of course it was. (We will consider the reasons that Gallup was right later in this chapter.) Interest in surveying around election times has led to several long-term projects, notably the Canadian Election Studies which has measured opinions of Canadian voters around federal elections since 1965.  Anyone can access the data and read about the results of the experiments in these studies.

From market research and election polling, survey research made its way into several academic fields, including political science, sociology, and public health—where it continues to be one of the primary approaches to collecting new data. Beginning in the 1930s, psychologists made important advances in questionnaire design, including techniques that are still used today, such as the Likert scale. (See “What Is a Likert Scale?” in  Section 9.2 “Constructing Survey Questionnaires” .) Survey research has a strong historical association with the social psychological study of attitudes, stereotypes, and prejudice. Early attitude researchers were also among the first psychologists to seek larger and more diverse samples than the convenience samples of university students that were routinely used in psychology (and still are).

Survey research continues to be important in psychology today. For example, survey data have been instrumental in estimating the prevalence of various mental disorders and identifying statistical relationships among those disorders and with various other factors. The National Comorbidity Survey is a large-scale mental health survey conducted in the United States. In just one part of this survey, nearly 10,000 adults were given a structured mental health interview in their homes in 2002 and 2003. Table 9.1 presents results on the lifetime prevalence of some anxiety, mood, and substance use disorders. (Lifetime prevalence is the percentage of the population that develops the problem sometime in their lifetime.) Obviously, this kind of information can be of great use both to basic researchers seeking to understand the causes and correlates of mental disorders as well as to clinicians and policymakers who need to understand exactly how common these disorders are.

And as the opening example makes clear, survey research can even be used to conduct experiments to test specific hypotheses about causal relationships between variables. Such studies, when conducted on large and diverse samples, can be a useful supplement to laboratory studies conducted on university students. Although this approach is not a typical use of survey research, it certainly illustrates the flexibility of this method.

Key Takeaways

  • Survey research is a quantitative approach that features the use of self-report measures on carefully selected samples. It is a flexible approach that can be used to study a wide variety of basic and applied research questions.
  • Survey research has its roots in applied social research, market research, and election polling. It has since become an important approach in many academic disciplines, including political science, sociology, public health, and, of course, psychology.

Discussion: Think of a question that each of the following professionals might try to answer using survey research.

  • a social psychologist
  • an educational researcher
  • a market researcher who works for a supermarket chain
  • the mayor of a large city
  • the head of a university police force
  • Converse, J. M. (1987). Survey research in the United States: Roots and emergence, 1890–1960 . Berkeley, CA: University of California Press. ↵
  • The lifetime prevalence of a disorder is the percentage of people in the population that develop that disorder at any time in their lives. ↵

A quantitative approach in which variables are measured using self-reports from a sample of the population.

Participants of a survey.

Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.









A Practical Introduction to Survey Design: A Beginner's Guide

3. Survey design and the research process

How are the research process and survey design process related?

The survey design process is related to the research process because surveys are a key research method through which data are collected to answer the research questions identified as part of the research process.

What is the relevance of topics to survey design?

Topics are the broad areas of research that a survey designer will investigate and refine in preparation for researching via a survey. Topics are the entry point to survey design and contain theories and perspectives. The studies associated with a topic reflect the various methodological approaches used to research it up to a point in time.

How and why do researchers refine topics into discrete research questions?

Researchers must refine topics to make research projects manageable and doable. Given how broad a topic can be, a researcher will need to identify and select specific aspects of it from a field or perspective. A researcher will approach a topic, ask themselves a series of related questions about what they really want to know, and then refine the topic into a series of discrete and precise research questions.

Why is theory considered to be important to survey design?

Theory is important to survey design because theories are central to any academic discipline in explaining why or how something occurs. Theories are composed of concepts, ideas, and propositions that can be measured to substantiate or question a theory.

What are the kinds of research questions best suited to research using survey design and why?

Descriptive, relational/explanatory, and causal/associative/correlative questions. These questions are best suited to survey design because they are most appropriate to the logic of survey data.

What is the relational logic between variables in survey research and how is it related to research questions and theories?

In survey research, the relational logic between variables is that as one variable (the independent variable) increases, decreases, or differs across its categories, the dependent variable will correspondingly increase, decrease, or differ across its categories. This relationship reflects what is proposed in a theory and is represented by the research questions.

What are the different forms of research design and how are surveys used in them?

Experimental/evaluative, cross-sectional, longitudinal, and case study designs. Surveys are used in these designs in similar ways, but the aims of the surveys may differ in each case, especially for an evaluative survey, which will be focused on a particular aim.

What Is A Survey And Why Does Survey Research Matter?

There’s a saying: “If you want to know about something, just ask.” Unless you are a mind reader, the only way to find out what people are thinking is to ask them. That is what surveys are for: collecting the data and information that is sought. Surveys can be conducted in various forms, but the most commonly used method is the online survey.

Whether you’re interested in knowing what your customers think about a product or how many people will watch the T20 World Cup 2021, conducting surveys is one of the most effective ways to collect insightful information.

“Survey” seems like a simple term, but the process becomes more complex once we dive into it. The way a question is asked often determines the kind of answer you get back. Hence the first decision you have to make is: are you going to ask an open-ended or a closed-ended question? This will depend on the type of information to be collected, which in turn shapes which questions to ask, how to ask them, and who should answer them.

We, at Conclave Research, endeavor to build and design your surveys in the most effective and efficient manner. In this blog, we will discuss what a survey is and why survey research matters.

What is a Survey?

A survey is a research method that involves collecting information about people’s preferences, behaviors, and thoughts. Researchers can conduct surveys in multiple ways depending on the chosen study objective. There are numerous survey research methods, such as interviews, focus groups, mailed questionnaires, online questionnaires, and more. Usually, though not always, the information is collected through interviews and questionnaires.

What is an Online Survey?

An online survey is a data collection process conducted over the internet, in which the target audience completes a structured questionnaire, usually by filling out a form. Online surveys are often considered the most efficient way to conduct a survey because it is easy to reach the target audience and they are less time consuming than traditional methods. The information collected through online surveys can be stored in a database that researchers can easily access.

Why does survey research matter?

The main reason for conducting research surveys is to uncover answers to specific, important questions. To collect the most meaningful information, it is therefore very important to design the survey questionnaire in the best possible way.


Here are major reasons why organizations should conduct survey research:

A medium for discussion

Surveys give respondents an effective means of discussing key topics such as product quality or customer service. It is important to communicate with respondents about the research topic, perhaps through an open-ended question where they can describe their views briefly.

Understand respondents to uncover the answers

In survey research, respondents can easily provide meaningful insights about their likes and dislikes regarding a particular product or service, along with feedback for improvement. Researchers must secure these responses and use them in the best possible way, which keeps respondents motivated. Methods such as mobile, online, or paper surveys often prove more effective than telephone or face-to-face surveys.

Strategy for comparing results

Survey research provides a snapshot of people’s behavior, thoughts, and preferences. Researchers can use this valuable feedback to improve products and services and establish a baseline against which to compare results over time.

Are you looking for a survey platform with an effective support team? Get in touch and our team will turn your surveys into a reliable source of insightful information.


Survey Research Design

The survey research design is often used because of its low cost and the easily accessible information it provides.


Introduction

Conducting accurate and meaningful surveys is one of the most important facets of market research in the consumer-driven 21st century.

Businesses, governments and media spend billions of dollars on finding out what people think and feel.

Accurate research can generate vast amounts of revenue; bad or inaccurate research can cost millions, or even bring down governments.

The survey research design is a very valuable tool for assessing opinions and trends. Even on a small scale, such as local government or small businesses, judging opinion with carefully designed surveys can dramatically change strategies.

Television chat-shows and newspapers are usually full of facts and figures gleaned from surveys but often no information is given as to where this information comes from or what kind of people were asked.

A cursory examination of these figures usually shows that the results of these surveys are often manipulated or carefully sifted to distort the results to match the whims of the owners.

Businesses are often guilty of carefully selecting certain results to try and portray themselves as the answer to all needs.

When you decide to enter this minefield and design a survey, how do you avoid falling into the trap of inaccuracy and bias? How do you ensure that your survey research design reflects the views of a genuine cross-section of the population?

The simple answer is that you cannot; even with unlimited budget, time and resources, there is no way of achieving 100% accuracy. Opinions, on all levels, are very fluid and can change on a daily or even hourly basis.


Despite this, surveys remain an extremely powerful research tool. As long as you design your survey well and are prepared to be self-critical, you can still obtain an accurate representation of opinion.


Establishing the Aims of Your Research

This is the single most important step of your survey research design and can make or break your research; every single element of your survey must refer back to these aims or it will be fatally flawed.

If your research is too broad, you will have to ask too many questions; too narrow and you will not be researching the topic thoroughly enough.

Researching and Determining Your Sample Group

This is the next crucial step in determining your survey and depends upon many factors.

The first is accuracy; you want to try and interview as broad a base of people as possible. Quantity is not always the answer; if you were researching a detergent, for example, you would want to target your questions at those who actually use such products.

For a political or ethical survey, about which anybody can have a valid opinion, you want to try and represent a well balanced cross section of society.

It is always worth checking beforehand what quantity and breadth of response you need to provide significant results or your hard work may be in vain.

Before you start the planning, it is important that you consult somebody about the statistical side of your survey research design. This way, you know what number and type of responses you need to make it a valid survey and prevent inaccurate results.
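
For a rough sense of the numbers involved, a commonly used starting point for surveys that estimate a proportion is Cochran's sample size formula. The sketch below (Python; the confidence level, expected proportion, and margin of error are illustrative choices that you would confirm with someone versed in statistics) shows how the required number of responses can be approximated:

```python
import math

def sample_size_for_proportion(z=1.96, p=0.5, margin_of_error=0.05):
    """Cochran's formula: minimum n needed to estimate a proportion p
    within the given margin of error (z = 1.96 gives 95% confidence)."""
    n = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n)

# Worst-case p = 0.5, 95% confidence, +/-5 percentage points -> about 385 responses.
print(sample_size_for_proportion())  # 385
```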

Methodology

How do you make sure that your questionnaire reaches the target group? There are many methods of reaching people but all have advantages and disadvantages.

For a college or university study it is unlikely that you will have the facilities to use internet, email, or phone surveying, so we will concentrate only on the methods you are likely to use.

Face to Face

This is probably the most traditional method of the survey research design. It can be very accurate. It allows you to be selective about to whom you ask questions and you can explain anything that they do not understand.

In addition, you can make a judgment about who you think is wasting your time or giving stupid answers.

There are a few things to be careful of with this approach; firstly, people can be reluctant to give up their time without some form of incentive.

Another factor to bear in mind is that it is difficult to ask personal questions face to face without embarrassing people. It is also very time consuming and difficult to obtain a representative sample.

Finally, if you are going to be asking questions door-to-door, it is essential to ensure that you have some official identification to prove who you are.

Postal Surveys

This does not necessarily mean using the postal service; putting in the legwork and delivering questionnaires around a campus or workplace is another method.

This is a good way of targeting a certain section of people and is excellent if you need to ask personal or potentially embarrassing questions.

The main problem with this method is that you cannot be sure how many responses you will receive until a long period of time has passed.

You must also be wary of collecting personal data; most countries have laws about how much information you can keep about people so it is always wise to check with somebody more knowledgeable.

Structuring and Designing the Questionnaire

The design of your questionnaire depends very much upon the type of survey and the target audience.

If you are asking questions face to face it is easy to explain if people are unsure of a question. On the other hand, if your questionnaire is going to include many personal questions then mailing methods are preferable (but may violate local legislation).

You must keep your questionnaire as short as possible; people will either refuse to fill in a long questionnaire or get bored halfway through.

If you do have lots of information then it may be preferable to offer multiple-choice or rating questions to make life easier.

It is also polite, especially with mailed questionnaires, to send a short cover note explaining what you are doing and how the subject should return the surveys to you.

You should introduce yourself; explain why you are doing the research, what will happen with the results and who to contact if the subject has any queries.

Types of Question

Multiple-choice questions allow many different answers, including 'don't know', to be assessed. The main strength of this type of question is that the form is easy to fill in and the answers can be checked easily and quantitatively; this is useful for large sample groups.

Rating on a scale is a tried and tested form of question structure. It is very useful when you want to be a little more open-ended than is possible with multiple-choice questions, although the responses are a little harder to analyze. It is important to make sure that the scale allows for extreme views.
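
As a small illustration of how rating responses can be summarized (Python standard library; the ratings are invented), the sketch below reports the average and median of a 1-7 rating question and checks whether respondents actually used the extremes of the scale:

```python
import statistics

# Hypothetical answers to one 1-7 rating question, where 1 and 7 are the extremes.
ratings = [5, 6, 4, 7, 3, 5, 6, 2, 5, 1, 6, 5]

print(statistics.mean(ratings))     # average rating
print(statistics.median(ratings))   # middle rating, less affected by extremes
print(min(ratings), max(ratings))   # check whether the extremes were used
```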

Questions asking for opinions must be open-ended and allow the subject to give their own response; you should avoid entrapment and appear to be as neutral as possible during the procedure. The major problem is that you have to devise a numerical way of analyzing and statistically evaluating the responses, which can lead to a biased view if care is not taken. These types of question should really be reserved for experienced researchers.

The order in which you ask the questions can be important. Try to start with the most relevant questions. Friendly and non-threatening questions also put the interviewee at ease. Questions should be simple and straightforward, using everyday language rather than perfect grammar.

Try to group questions about similar topics together; this makes it quicker and easier for people to answer.

Some researchers advocate mixing up and randomizing questions for accuracy but this approach tends to be more appropriate for advanced market research. For this type of survey the researcher is trying to disguise the nature of the research and filter out preconceptions.

It is also a good idea to try out a test survey; ask a small group to give genuine and honest feedback so that you can make adjustments.


Analyzing Your Results

This is where the fun starts and it will depend upon the type of questions used.

For multiple choice questions it is a matter of counting up the answers to each question and using statistics to 'crunch the numbers' and test relevance.

Rating type questions require a little more work but they follow broadly the same principle.

For opinion questions you have to devise some way of judging the responses numerically.

The next step is to decide which statistical test you are going to use and start to enter some numbers to judge the significance of your data.
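
A minimal sketch of that workflow for a single multiple-choice question, assuming Python with SciPy installed (the answers and the 'all options are equally popular' null hypothesis are purely illustrative):

```python
from collections import Counter
from scipy.stats import chisquare  # assumes SciPy is installed

# Hypothetical answers to one multiple-choice question.
answers = ["yes", "no", "yes", "don't know", "yes",
           "no", "yes", "yes", "no", "don't know"]

counts = Counter(answers)
print(counts)  # Counter({'yes': 5, 'no': 3, "don't know": 2})

# Chi-square goodness-of-fit test against the (illustrative) null hypothesis
# that every answer option is equally popular.
result = chisquare(list(counts.values()))
print(result.statistic, result.pvalue)
```

The right test depends on your design, so treat this as one possible choice rather than a prescription.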

Conclusions

This is where you interpret the results. Be self-critical about whether your results showed what you expected or not. Any survey has flaws in its method, so it is always a good idea to show that you are aware of these.

For example, a university sample represents only a narrow cross-section of society; as long as you are aware of this, your results are still valid. If your survey gave unexpected results, explain the possible reasons why this happened and suggest how you would refine the techniques and structure of your survey next time.

As long as you have justified yourself and pointed out your own shortcomings then your results will be relevant and you should receive a good result.

Martyn Shuttleworth (Jul 5, 2008). Survey Research Design. Retrieved Mar 22, 2024 from Explorable.com: https://explorable.com/survey-research-design

Survey Research: An Effective Design for Conducting Nursing Research

  • Vicki A. Keough, PhD, RN-BC, ACNP: Dean and Professor at Loyola University Chicago, Marcella Niehoff School of Nursing, Maywood, Illinois
  • Paula Tanabe, PhD, MPH, RN: Research Assistant Professor in the Department of Emergency Medicine and the Institute for Healthcare Studies at Northwestern University, Feinberg School of Medicine, Chicago, Illinois

DOI: https://doi.org/10.1016/S2155-8256(15)30315-X

Survey descriptive research: Method, design, and examples

  • November 2, 2022

What is survey descriptive research?

This article covers the following topics:

  • The observational method: monitor people while they engage with a subject
  • The case study method: gain an in-depth understanding of a subject
  • Survey descriptive research: easy and cost-effective
  • Types of descriptive research design
  • What is the descriptive survey research design definition by authors?
  • Characteristics and advantages: quantitative and qualitative data, uncontrolled variables, a natural environment, and a solid basis for further research
  • Describe a group and define its characteristics
  • Measure data trends by conducting descriptive marketing research
  • Understand how customers perceive a brand
  • Descriptive survey research design: how to make the best descriptive questionnaire
  • Create descriptive surveys with SurveyPlanet

Survey descriptive research is a quantitative method that focuses on describing the characteristics of a phenomenon rather than asking why it occurs. Doing this provides a better understanding of the nature of the subject at hand and creates a good foundation for further research.

Descriptive market research is one of the most commonly used ways of examining trends and changes in the market. It is easy, low-cost, and provides valuable in-depth information on a chosen subject.

This article will examine the basic principles of the descriptive survey study and show how to make the best descriptive survey questionnaire and how to conduct effective research.

It is often said to be quantitative research that focuses more on the what, how, when, and where instead of the why. But what does that actually mean?

The answer is simple. By conducting descriptive survey research, the nature of a phenomenon is focused upon without asking about what causes it.

The main goal of survey descriptive research is to shed light on the heart of the research problem and better understand it. The technique provides in-depth knowledge of what the research problem is before investigating why it exists.

Survey descriptive research and data collection methods

Descriptive research methods can differ based on data collection. We distinguish three main data collection methods: case study, observational method, and descriptive survey method.

Of these, the descriptive survey research method is most commonly used in fields such as market research, social research, psychology, politics, etc.

The observational method, sometimes also called the observational descriptive method, simply involves monitoring people while they engage with a particular subject. The aim is to examine people’s real-life behavior by maintaining a natural environment that does not change the respondents’ behavior, because they do not know they are being observed.

It is often used in fields such as market research, psychology, or social research. For example, customers can be monitored while dining at a restaurant or browsing through the products in a shop.

When doing case studies, researchers conduct thorough examinations of individuals or groups. The case study method is not used to collect general information on a particular subject. Instead, it provides an in-depth understanding of a particular subject and can give rise to interesting conclusions and new hypotheses.

The term case study can also refer to a sample group, a specific group of people who are examined and whose findings are afterward generalized to a larger group of people. However, this kind of generalization is rather risky because it is not always accurate.

Additionally, case studies cannot be used to determine cause and effect because of potential bias on the researcher’s part.

The survey descriptive research method consists of creating questionnaires or polls and distributing them to respondents, who then answer the questions (usually a mix of open-ended and closed-ended).

Surveys are the easiest and most cost-efficient way to gain feedback on a particular topic. They can be conducted online or offline, the size of the sample is highly flexible, and they can be distributed through many different channels.

When doing market research, use such surveys to understand the demographic of a certain market or population, better determine the target audience, keep track of the changes in the market, and learn about customer experience and satisfaction with products and services.

Several types of survey descriptive research are classified based on the approach used:

  • Descriptive surveys gather information about a certain subject.
  • Descriptive-normative surveys gather information just like a descriptive survey, after which results are compared with a norm.
  • Correlative surveys explore the relationship between two variables and conclude if it is positive, neutral, or negative.

A descriptive survey research design is a methodology used in social science and other fields to gather information and describe the characteristics, behaviors, or attitudes of a particular population or group of interest. While there may not be a single definition provided by specific authors, the concept is widely understood and defined similarly across the literature.

Here’s a general definition that captures the essence of how authors define a descriptive survey research design:

A descriptive survey research design is a systematic and structured approach to collecting data from a sample of individuals or entities within a larger population, with the primary aim of providing a detailed and accurate description of the characteristics, behaviors, opinions, or attitudes that exist within the target group. This method involves the use of surveys, questionnaires, interviews, or observations to collect data, which is then analyzed and summarized to draw conclusions about the population of interest.

It’s important to note that descriptive survey research is often used when researchers want to gain insights into a population or phenomenon, but without manipulating variables or testing hypotheses, as is common in experimental research. Instead, it focuses on providing a comprehensive overview of the subject under investigation. Researchers often use various statistical and analytical techniques to summarize and interpret the collected data in descriptive survey research.

The characteristics and advantages of a descriptive survey questionnaire

There are numerous advantages to using a descriptive survey design. First of all, it is cheap and easy to conduct. A large sample can be surveyed and extensive data gathered quickly and inexpensively.

The data collected provides both quantitative and qualitative information, which gives a holistic understanding of the topic. Moreover, it can be used in further research on this or related topics.

Here are some of the most important advantages of conducting survey descriptive research:

The descriptive survey research design uses both quantitative and qualitative research methods. It is used primarily to conduct quantitative research and gather data that is statistically easy to analyze. However, it can also provide qualitative data that helps describe and understand the research subject.

Descriptive research explores more than one variable. However, unlike experimental research, descriptive survey research design doesn’t allow control of variables. Instead, observational methods are used during research. Even though these variables can change and have an unexpected impact on an inquiry, they will give access to honest responses.

Descriptive research is conducted in a natural environment. This way, the answers gathered are more honest because the nature of the research does not influence them.

The data collected through descriptive research can be used to further explore the same or related subjects. Additionally, it can help develop the next line of research and the best method to use moving forward.

Descriptive survey example: When to use a descriptive research questionnaire?

Descriptive research design can be used for many purposes. It is mainly utilized to test a hypothesis, define the characteristics of a certain phenomenon, and examine the correlations between them.

Market research is one of the main fields in which descriptive methods are used to conduct studies. Here’s what can be done using this method:

Understanding the needs of customers and their desires is the key to a business’s success. By truly understanding these, it will be possible to offer exactly what customers need and prevent them from turning to competitors.

By using a descriptive survey, different customer characteristics—such as traits, opinions, or behavior patterns—can be determined. With this data, different customer types can be defined and profiles developed that focus on their interests and the behavior they exhibit. This information can be used to develop new products and services that will be successful.

Measuring data trends is extremely important. Explore the market and get valuable insights into how consumers’ interests change over time—as well as how the competition is performing in the marketplace.

Over time, the data gathered from a descriptive questionnaire can be subjected to statistical analysis. This will deliver valuable insights.
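
For instance, a minimal sketch (Python with pandas installed; the waves and ratings are invented) of summarizing how an average rating moves between two waves of the same questionnaire:

```python
import pandas as pd

# Hypothetical responses to the same descriptive questionnaire run in two waves.
responses = pd.DataFrame({
    "wave": ["2023", "2023", "2023", "2024", "2024", "2024"],
    "satisfaction": [3, 4, 5, 4, 5, 5],  # 1-5 rating
})

# Summarize the trend: average satisfaction and response count per wave.
print(responses.groupby("wave")["satisfaction"].agg(["mean", "count"]))
```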

Another important aspect to consider is brand awareness. People need to know about your brand, and they need to have a positive opinion of it. The best way to discover their perception is to conduct a brand survey, which gives deeper insight into brand awareness, perception, identity, and customer loyalty.

When conducting survey descriptive research, there are a few basic steps that are needed for a survey to be successful:

  • Define the research goals.
  • Decide on the research method.
  • Define the sample population.
  • Design the questionnaire.
  • Write specific questions.
  • Distribute the questionnaire.
  • Analyze the data.
  • Make a survey report.

First of all, define the research goals. Setting up clear objectives makes every other step easier to work through and results in a well-designed descriptive questionnaire that collects only valuable data.

Next, decide on the research method to use—in this case, the descriptive survey method. Then, define the sample population (that is, the target audience). After that, think about the design itself and the questions that will be asked in the survey.

If you’re not sure where to start, we’ve got you covered. As free survey software, SurveyPlanet offers pre-made themes that are clean and eye-catching, as well as pre-made questions that will save you the trouble of making new ones.

Simply scroll through our library and choose a descriptive survey questionnaire sample that best suits your needs, though our user-friendly interface can help you create bespoke questions in a process that is easy and efficient.

With a survey in hand, it will then need to be delivered to the target audience. This is easy with our survey embedding feature, which allows for the linking of surveys on a website, via emails, or by sharing on social media.

When all the responses are gathered, it’s time to analyze them. Use SurveyPlanet to easily filter data and do cross-sectional analysis. Finally, just export the results and make a survey report.

Conducting descriptive survey research is the best way to gain a deeper knowledge of a topic of interest and develop a sound basis for further research. Sign up for a free SurveyPlanet account to start improving your business today!


Research Design – Types, Methods and Examples

Definition:

Research design refers to the overall strategy or plan for conducting a research study. It outlines the methods and procedures that will be used to collect and analyze data, as well as the goals and objectives of the study. Research design is important because it guides the entire research process and ensures that the study is conducted in a systematic and rigorous manner.

Types of Research Design

Types of Research Design are as follows:

Descriptive Research Design

This type of research design is used to describe a phenomenon or situation. It involves collecting data through surveys, questionnaires, interviews, and observations. The aim of descriptive research is to provide an accurate and detailed portrayal of a particular group, event, or situation. It can be useful in identifying patterns, trends, and relationships in the data.

Correlational Research Design

Correlational research design is used to determine if there is a relationship between two or more variables. This type of research design involves collecting data from participants and analyzing the relationship between the variables using statistical methods. The aim of correlational research is to identify the strength and direction of the relationship between the variables.
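
A minimal sketch of such an analysis, assuming Python with SciPy installed (the paired measurements are invented for illustration); the Pearson coefficient reports both the strength and the direction of the relationship:

```python
from scipy.stats import pearsonr

# Hypothetical paired survey measures for seven respondents:
# weekly study hours and exam score.
study_hours = [2, 4, 5, 7, 8, 10, 12]
exam_scores = [55, 60, 62, 70, 72, 80, 85]

r, p_value = pearsonr(study_hours, exam_scores)
print(f"r = {r:.2f}, p = {p_value:.4f}")  # strength and direction, plus significance
```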

Experimental Research Design

Experimental research design is used to investigate cause-and-effect relationships between variables. This type of research design involves manipulating one variable and measuring the effect on another variable. It usually involves randomly assigning participants to groups and manipulating an independent variable to determine its effect on a dependent variable. The aim of experimental research is to establish causality.

Quasi-experimental Research Design

Quasi-experimental research design is similar to experimental research design, but it lacks one or more of the features of a true experiment. For example, there may not be random assignment to groups or a control group. This type of research design is used when it is not feasible or ethical to conduct a true experiment.

Case Study Research Design

Case study research design is used to investigate a single case or a small number of cases in depth. It involves collecting data through various methods, such as interviews, observations, and document analysis. The aim of case study research is to provide an in-depth understanding of a particular case or situation.

Longitudinal Research Design

Longitudinal research design is used to study changes in a particular phenomenon over time. It involves collecting data at multiple time points and analyzing the changes that occur. The aim of longitudinal research is to provide insights into the development, growth, or decline of a particular phenomenon over time.

Structure of Research Design

The format of a research design typically includes the following sections:

  • Introduction : This section provides an overview of the research problem, the research questions, and the importance of the study. It also includes a brief literature review that summarizes previous research on the topic and identifies gaps in the existing knowledge.
  • Research Questions or Hypotheses: This section identifies the specific research questions or hypotheses that the study will address. These questions should be clear, specific, and testable.
  • Research Methods : This section describes the methods that will be used to collect and analyze data. It includes details about the study design, the sampling strategy, the data collection instruments, and the data analysis techniques.
  • Data Collection: This section describes how the data will be collected, including the sample size, data collection procedures, and any ethical considerations.
  • Data Analysis: This section describes how the data will be analyzed, including the statistical techniques that will be used to test the research questions or hypotheses.
  • Results : This section presents the findings of the study, including descriptive statistics and statistical tests.
  • Discussion and Conclusion : This section summarizes the key findings of the study, interprets the results, and discusses the implications of the findings. It also includes recommendations for future research.
  • References : This section lists the sources cited in the research design.

Example of Research Design

An Example of Research Design could be:

Research question: Does the use of social media affect the academic performance of high school students?

Research design:

  • Research approach: The research approach will be quantitative, as it involves collecting numerical data to test the hypothesis.
  • Research design: The research design will be quasi-experimental, with a pretest-posttest control group design.
  • Sample: The sample will be 200 high school students from two schools, with 100 students in the experimental group and 100 students in the control group.
  • Data collection: The data will be collected through surveys administered to the students at the beginning and end of the academic year. The surveys will include questions about their social media usage and academic performance.
  • Data analysis: The data collected will be analyzed using statistical software. The mean scores of the experimental and control groups will be compared to determine whether there is a significant difference in academic performance between the two groups (a minimal sketch of such a comparison follows this list).
  • Limitations: The limitations of the study will be acknowledged, including the fact that social media usage can vary greatly among individuals, and the study only focuses on two schools, which may not be representative of the entire population.
  • Ethical considerations: Ethical considerations will be taken into account, such as obtaining informed consent from the participants and ensuring their anonymity and confidentiality.
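
A minimal sketch of the group comparison described in the data analysis step above, assuming Python with SciPy installed; the scores are invented, and an independent-samples t-test is just one reasonable choice for comparing two group means:

```python
from scipy.stats import ttest_ind

# Hypothetical end-of-year performance scores (0-100) for a few students in each
# group; the study described above would have 100 students per group.
experimental_group = [72, 68, 75, 80, 66, 71, 74, 69]
control_group = [65, 70, 62, 68, 64, 66, 71, 63]

t_stat, p_value = ttest_ind(experimental_group, control_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 suggests a significant difference
```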

How to Write Research Design

Writing a research design involves planning and outlining the methodology and approach that will be used to answer a research question or hypothesis. Here are some steps to help you write a research design:

  • Define the research question or hypothesis : Before beginning your research design, you should clearly define your research question or hypothesis. This will guide your research design and help you select appropriate methods.
  • Select a research design: There are many different research designs to choose from, including experimental, survey, case study, and qualitative designs. Choose a design that best fits your research question and objectives.
  • Develop a sampling plan : If your research involves collecting data from a sample, you will need to develop a sampling plan. This should outline how you will select participants and how many participants you will include.
  • Define variables: Clearly define the variables you will be measuring or manipulating in your study. This will help ensure that your results are meaningful and relevant to your research question.
  • Choose data collection methods : Decide on the data collection methods you will use to gather information. This may include surveys, interviews, observations, experiments, or secondary data sources.
  • Create a data analysis plan: Develop a plan for analyzing your data, including the statistical or qualitative techniques you will use.
  • Consider ethical concerns : Finally, be sure to consider any ethical concerns related to your research, such as participant confidentiality or potential harm.

When to Write Research Design

Research design should be written before conducting any research study. It is an important planning phase that outlines the research methodology, data collection methods, and data analysis techniques that will be used to investigate a research question or problem. The research design helps to ensure that the research is conducted in a systematic and logical manner, and that the data collected is relevant and reliable.

Ideally, the research design should be developed as early as possible in the research process, before any data is collected. This allows the researcher to carefully consider the research question, identify the most appropriate research methodology, and plan the data collection and analysis procedures in advance. By doing so, the research can be conducted in a more efficient and effective manner, and the results are more likely to be valid and reliable.

Purpose of Research Design

The purpose of research design is to plan and structure a research study in a way that enables the researcher to achieve the desired research goals with accuracy, validity, and reliability. Research design is the blueprint or the framework for conducting a study that outlines the methods, procedures, techniques, and tools for data collection and analysis.

Some of the key purposes of research design include:

  • Providing a clear and concise plan of action for the research study.
  • Ensuring that the research is conducted ethically and with rigor.
  • Maximizing the accuracy and reliability of the research findings.
  • Minimizing the possibility of errors, biases, or confounding variables.
  • Ensuring that the research is feasible, practical, and cost-effective.
  • Determining the appropriate research methodology to answer the research question(s).
  • Identifying the sample size, sampling method, and data collection techniques.
  • Determining the data analysis method and statistical tests to be used.
  • Facilitating the replication of the study by other researchers.
  • Enhancing the validity and generalizability of the research findings.

Applications of Research Design

There are numerous applications of research design in various fields, some of which are:

  • Social sciences: In fields such as psychology, sociology, and anthropology, research design is used to investigate human behavior and social phenomena. Researchers use various research designs, such as experimental, quasi-experimental, and correlational designs, to study different aspects of social behavior.
  • Education : Research design is essential in the field of education to investigate the effectiveness of different teaching methods and learning strategies. Researchers use various designs such as experimental, quasi-experimental, and case study designs to understand how students learn and how to improve teaching practices.
  • Health sciences : In the health sciences, research design is used to investigate the causes, prevention, and treatment of diseases. Researchers use various designs, such as randomized controlled trials, cohort studies, and case-control studies, to study different aspects of health and healthcare.
  • Business : Research design is used in the field of business to investigate consumer behavior, marketing strategies, and the impact of different business practices. Researchers use various designs, such as survey research, experimental research, and case studies, to study different aspects of the business world.
  • Engineering : In the field of engineering, research design is used to investigate the development and implementation of new technologies. Researchers use various designs, such as experimental research and case studies, to study the effectiveness of new technologies and to identify areas for improvement.

Advantages of Research Design

Here are some advantages of research design:

  • Systematic and organized approach : A well-designed research plan ensures that the research is conducted in a systematic and organized manner, which makes it easier to manage and analyze the data.
  • Clear objectives: The research design helps to clarify the objectives of the study, which makes it easier to identify the variables that need to be measured, and the methods that need to be used to collect and analyze data.
  • Minimizes bias: A well-designed research plan minimizes the chances of bias, by ensuring that the data is collected and analyzed objectively, and that the results are not influenced by the researcher’s personal biases or preferences.
  • Efficient use of resources: A well-designed research plan helps to ensure that the resources (time, money, and personnel) are used efficiently and effectively, by focusing on the most important variables and methods.
  • Replicability: A well-designed research plan makes it easier for other researchers to replicate the study, which enhances the credibility and reliability of the findings.
  • Validity: A well-designed research plan helps to ensure that the findings are valid, by ensuring that the methods used to collect and analyze data are appropriate for the research question.
  • Generalizability : A well-designed research plan helps to ensure that the findings can be generalized to other populations, settings, or situations, which increases the external validity of the study.


About the author: Muhammad Hassan, Researcher, Academic Writer, Web developer.


Survey research

Affiliation: Section of Plastic Surgery, Department of Surgery, The University of Michigan Medical Center; and Division of General Medicine, Department of Internal Medicine, University of Michigan, Ann Arbor, Mich.

PMID: 20885261. DOI: 10.1097/PRS.0b013e3181ea44f9

Survey research is a unique methodology that can provide insight into individuals' perspectives and experiences and can be collected on a large population-based sample. Specifically, in plastic surgery, survey research can provide patients and providers with accurate and reproducible information to assist with medical decision-making. When using survey methods in research, researchers should develop a conceptual model that explains the relationships of the independent and dependent variables. The items of the survey are of primary importance. Collected data are only useful if they accurately measure the concepts of interest. In addition, administration of the survey must follow basic principles to ensure an adequate response rate and representation of the intended target sample. In this article, the authors review some general concepts important for successful survey research and discuss the many advantages this methodology has for obtaining limitless amounts of valuable information.

  • Open access
  • Published: 13 March 2024

Importance of residency applicant factors based on specialty and demographics: a national survey of program directors

Sarah A. Strausser, Kelly M. Dopke, Destin Groff, Sue Boehmer & Robert P. Olympia

BMC Medical Education, volume 24, Article number: 275 (2024)


With the transition away from traditional numerical grades/scores, residency applicant factors such as service, research, leadership, and extra-curricular activities may become more critical in the application process.

To assess the importance of residency application factors reported by program directors (PDs), stratified by director demographics and specialty.

A questionnaire was electronically distributed to 4241 residency PDs in 23 specialties during spring 2022 and included questions on PD demographics and 22 residency applicant factors, including demographics, academic history, research involvement, and extracurricular activities. Responses were measured using a Likert scale for importance. Descriptive statistics and Chi-square and Fisher exact test analysis were performed.

A total of 767 questionnaires were completed (19% response rate). Across all specialties, the factor considered most important was the interview (99.5%). When stratified by specialty, surgical PDs were more likely to characterize class rank, letters of recommendation, research, presenting scholarly work, and involvement in collegiate sports as extremely important/very important (all p < 0.0001). In contrast, primary care PDs favored the proximity of the candidate’s hometown (p = 0.0002) and community service (p = 0.03). Mean importance of applicant factors also differed by PD age, gender, and ethnicity.

We have identified several residency application factors considered important by PDs, stratified by their specialty, demographics, and previous experiences. With the transition away from numerical grades/scores, medical students should be aware of the factors PDs consider important based on their chosen specialty. Our analysis may assist medical students in understanding the application and match process across various specialties.


Introduction

Several changes in medical school performance assessments have occurred over the past several years. The United States Medical Licensing Exam (USMLE) Step 1 transitioned from a numeric score to a pass/fail outcome on January 26, 2022 [ 1 ]. The score on this exam was previously considered one of the most important factors for choosing which residency applicants to interview, especially in applicants applying to competitive specialties [ 2 ]. Many medical school curriculums are graded as pass/fail during the preclinical years; thus, preclinical grades cannot be uniformly used to assess applicants. During the COVID-19 pandemic, numerous medical schools also changed their clinical year grading systems to pass/fail [ 3 ]. Given limited quantifiable data, these recent pass/fail reporting changes raise questions about how residency applicants should be objectively assessed.

The National Resident Matching Program (NRMP) Program Director Survey provides a wealth of information regarding factors that program directors value in selecting applicants to interview and ranking applicants in the Match. This survey is sent to residency program directors (PDs) of 23 specialties biennially, with the most recent report published in 2022 [ 4 ]. In 2021 and 2022, several items were deleted from the NRMP program director survey to allow space for more questions about holistic review and the virtual residency application experience during the pandemic, while decreasing respondents’ burden. Deleted items included questions about factors considered in decisions about which applicants to interview and rank and ratings of the importance of each factor. This survey compared the frequency with which programs interviewed and ranked specific applicant groups, including US MD Senior, US DO Senior, US MD Graduate, US DO Graduate, US IMG, and Non-US IMG. The survey also delved into the elements encompassed within the programs’ holistic review process and their significance. These factors include test scores, personal attributes, interests, interpersonal skills, ethics and professionalism, personal experiences, and geographic preferences.

While the NRMP survey results provide data regarding the mean importance of these holistic review factors, each of these factors is quite broad and does not characterize what specific attributes, interests, and experiences are of interest to program directors. In 2022, the NRMP stated that future iterations of the program director survey will re-introduce the questions about factors considered in decisions about which applicants to interview and rank. However, we felt it prudent to assess current opinions regarding which applicant factors are most important to program directors as programs continue to rely on a holistic review of applications due to decreasing objective data such as Step 1 scores.

Recent studies have surveyed program directors after the change to a pass/fail Step 1 score. One study found that approximately 40% of program directors replied that meaningful research participation would become more critical when choosing whom to offer interviews and that competitiveness of the specialty correlates with the reported importance of research [ 5 ]. Another study found that compared to non-procedural specialties, PDs of procedural specialties stated that they would place more emphasis on USMLE examinations such as Step 2 after transitioning to a pass/fail Step 1 [ 6 ]. While these studies, along with the NRMP data, provide substantial information regarding applicant factors that program directors emphasize when choosing whom to interview, the objective of our study was to analyze additional factors that may be emphasized in a holistic approach to applicant evaluation, such as athletic experience, military experience, medical scribe experience, and previous career prior to medicine. We evaluated the influence of athletic history on residency ranking given reported findings from other studies suggesting that prior participation in athletics may predict success in medical school and residency [ 7 , 8 , 9 ]. Medical scribe experience has also been postulated to be associated with academic success in medical school [ 10 ]. Military experience is associated with an increased likelihood of selecting a candidate for medical school interviews [ 11 ]. Secondarily, we assessed the importance of residency application factors stratified by PD demographics and specialty.

Methods

Following Institutional Review Board approval at Penn State College of Medicine, a questionnaire was electronically distributed to 4,241 residency PDs within 23 specialties during Spring 2022. PD contact information was collected for every specialty through the FREIDA (Fellowship and Residency Electronic Interactive Database Access) website. All PDs provided informed consent when they agreed to participate in the survey. The survey included seven questions on PD demographics and a list of 22 residency applicant factors, each rated on a Likert scale of importance, as shown in Appendix 1. The survey was developed by the authors, with no collection of validity evidence. Three reminder emails were sent at one-week intervals. Responses were stratified by specialty [surgical (general surgery, neurological surgery, obstetrics and gynecology, orthopaedic surgery, urology, otolaryngology, plastic surgery, and vascular surgery), primary care (family medicine, internal medicine, pediatrics, medicine-pediatrics), and all other (anesthesiology, child neurology, dermatology, diagnostic radiology/nuclear medicine, emergency medicine, interventional radiology, neurology, pathology, physical medicine and rehabilitation, radiation oncology, and psychiatry)] and by PD demographics (age, sex, ethnicity, previous extracurricular experiences). The extremes at each end of the Likert scale were combined for analysis because the difference between “not at all important” and “slightly important” is unlikely to affect decision-making significantly; the same applies to the difference between “very important” and “extremely important.”
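
To make that recoding step concrete, the sketch below shows one way to collapse a five-point Likert item into the three analysis groups described above. This is an illustration only, not the authors’ code, and the response labels and data are hypothetical.

```python
# Illustrative sketch: collapsing 5-point Likert responses into three groups.
import pandas as pd

# Hypothetical responses to one applicant factor (e.g., "class rank").
responses = pd.Series([
    "Not at all important", "Slightly important", "Moderately important",
    "Very important", "Extremely important", "Very important",
])

collapse_map = {
    "Not at all important": "Not at all/slightly important",
    "Slightly important": "Not at all/slightly important",
    "Moderately important": "Moderately important",
    "Very important": "Very/extremely important",
    "Extremely important": "Very/extremely important",
}

collapsed = responses.map(collapse_map)
print(collapsed.value_counts())  # counts per collapsed category
```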

Percentages and 95% confidence intervals were calculated for categorical variables. Differences between groups were assessed using contingency table analysis, with significance determined by Pearson’s chi-square statistic and Fisher’s exact test. P-values less than 0.05 were considered significant. Statistical analysis was performed using SAS software, version 9.4.
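
For readers who want to mirror these calculations outside SAS, the following minimal Python sketch (scipy and numpy) runs a chi-square test and Fisher’s exact test on a hypothetical 2×2 table and computes an approximate 95% confidence interval for one reported proportion. The counts are invented for illustration and do not come from the study data.

```python
# Illustrative re-creation of the tests described above (not the authors' SAS code).
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: rows = surgical vs. primary care PDs,
# columns = rated a factor very/extremely important vs. not.
table = np.array([[120, 69],     # surgical PDs
                  [95, 158]])    # primary care PDs

chi2, p_chi2, dof, _ = chi2_contingency(table)   # Pearson chi-square test
_, p_fisher = fisher_exact(table)                # exact test for small cells

# Approximate (Wald) 95% CI for a single proportion, e.g., 88.2% of 767 respondents.
p_hat, n = 0.882, 767
se = np.sqrt(p_hat * (1 - p_hat) / n)
ci_low, ci_high = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"chi-square p = {p_chi2:.4f}, Fisher p = {p_fisher:.4f}")
print(f"95% CI for 88.2%: ({ci_low:.3f}, {ci_high:.3f})")
```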

Results

Analysis was performed on 767 completed questionnaires (19% usable response rate). Most respondents were male (55.7%) and Caucasian (78.6%), and 38.0% were between ages 41 and 50 (Table 1). Respondents’ specialties included 189 surgical (25%), 253 primary care (33%), and 323 other (42%) (Fig. 1). The survey template is shown in Appendix 1. Across all specialties, the percentages of respondents who characterized the following factors as extremely important/very important were: interview (99.5%), passing USMLE examinations (88.2%), core clerkship grades (79.1%), demonstrating leadership (70%), letters of recommendation (69.4%), personal statement (64.2%), dean’s letter (49.4%), community service (40%), class rank (29.8%), specialty-specific research (19.3%), close proximity of candidate’s hometown (15.7%), research publications (15.1%), presenting their research at a scientific assembly (10.4%), ethnicity of candidate (9%), non-specialty-specific research (7.2%), previous involvement in collegiate sports (6.9%), previous career prior to medicine (5.5%), previous military experience (4.8%), previous involvement in global health (4.2%), sex of candidate (1.9%), previous involvement as a scribe (1.6%), and age of the applicant (1.4%) (Table 2). P-values less than 0.05 are bolded in Tables 2, 3, and 4 to distinguish significant findings.

Figure 1. Respondents by specialty type (Primary Care, Surgical, Other)

When stratified by specialty, surgical PDs were more likely to characterize the following factors as extremely important/very important: class rank (p < 0.0001), letters of recommendation (p < 0.0001), specialty-specific research (p < 0.0001), non-specialty-specific research (p < 0.0001), presenting their research at a scientific assembly (p < 0.0001), and previous involvement in collegiate sports (p < 0.0001). Primary care PDs were more likely to characterize the following factors as extremely important/very important: close proximity to the candidate’s hometown (p = 0.0002) and community service (p = 0.03) (Table 2). Data on the importance of residency applicant factors for each specialty grouped within “all others” are displayed in the Supplemental File without p-values due to small sample sizes.

When stratified by demographics, PDs aged 50 or younger were more likely than older PDs to characterize ethnicity (p = 0.04) and letters of recommendation (p = 0.03) as extremely important/very important. Male PDs were more likely to characterize class rank (p = 0.0007) as extremely important/very important. In contrast, female PDs were more likely to characterize ethnicity (p = 0.008) and community service (p = 0.005) as extremely important/very important. Non-Caucasian PDs were more likely to characterize proximity of hometown (p = 0.007), dean’s letter (p = 0.0003), letters of recommendation (p = 0.0003), involvement in global health (p = 0.001), and specialty-specific research (p = 0.0006) as extremely important/very important. PDs with previous collegiate sports experience were more likely than PDs without athletics experience to characterize leadership (p = 0.004) and previous involvement in collegiate athletics (p = 0.003) as extremely important/very important. PDs with prior global health experience were more likely than PDs without global health experience to characterize previous involvement in global health (p = 0.009) and community service involvement (p = 0.002) as extremely important/very important (Tables 3 and 4).

Discussion

This survey identified numerous specific residency application factors considered important by PDs, stratified by their specialty, demographics, and previous experience.

Although several of our survey questions were similar to questions from the 2022 NRMP Program Director Survey, our survey explores several applicant factors, and their mean importance ratings, that were asked in the 2020 NRMP Program Director Survey but dropped from the 2022 survey after Step 1 transitioned to pass/fail scoring. First, our survey uniquely stratifies responses by respondent demographics, allowing us to investigate how program director characteristics influence residency applicant ranking. Second, our survey expands on several residency applicant factors and supplements the 2022 NRMP survey, as these factors were removed to decrease respondent burden and focus on virtual aspects of residency applications during the pandemic, such as the shift to virtual recruitment and interviewing. Third, our survey focused on the importance of specific applicant experiences in residency ranking, including military experience, involvement in collegiate sports, experience as a scribe, and involvement in global health. Finally, our survey expanded on the importance of applicant demographics in residency ranking, including the age, sex, and ethnicity of the candidate and the proximity of the candidate’s hometown to the program.

Several factors from the 2020 NRMP Program Director Survey overlapped with factors included in our survey. Across all specialties, factors in the NRMP survey were assessed for their importance in ranking applicants. Overlapping factors from the 2020 survey included several components of the interview, such as interpersonal skills (percent citing factor, average rating on a Likert scale with 1 being not at all important and 5 being very important) (95%, 4.8), interactions with faculty (89%, 4.8), and interactions with housestaff (89%, 4.8), letters of recommendation (70%, 4.1), leadership qualities (60%, 4.3), Dean’s Letter (58%, 4.0), personal statement (54%, 3.8), other life experience (47%, 4.0), any failed attempt in USMLE (46%, 4.4), grades in required clerkships (42%, 4.0), volunteer/extracurricular experiences (35%, 3.9), and demonstrated involvement and interest in research (28%, 3.8) [2]. Our results revealed similar emphasis on the importance of the interview, letters of recommendation, leadership qualities, Dean’s Letter, and the personal statement. Notably, in our survey conducted after the transition of Step 1 scoring to pass/fail, 88.2% of respondents characterized passing USMLE examinations as extremely important/very important, compared to 46% of respondents from the 2020 NRMP Program Director Survey who cited any failed attempt in USMLE as an important factor in ranking applicants, with an average rating of 4.4 on a Likert scale of 1–5. This suggests that passing Step 1 on the first attempt has increased in importance since the transition to pass/fail scoring.

Our results also complement the 2022 NRMP Program Director Survey. Program directors cited the following broad categories as important in holistic review of applicants (percent citing factor, average rating on a Likert scale with 1 being not at all important and 5 being very important): applicant personal attributes (88%, 4.4), applicant interests (85%, 4.1), applicant interpersonal skills, ethics, and professionalism (81%, 4.6), and applicant personal experiences (81%, 3.9) [4].

Understanding our survey results may better equip medical students in their decision-making about prospective residency programs in the medical specialty of their choice, given both the program director’s demographics and the specialty to which they are applying.

The decision to make the USMLE Step 1 exam pass/fail will have implications for both medical students and residency programs during the application process [6]. With this transition, in addition to many schools adopting pass/fail grading for the didactic and clerkship years, students may find it harder to determine their competitiveness in a given specialty when relying solely on subjective feedback [12, 13]. In addition, without a three-digit score cutoff, or student self-selection away from specific programs based on a particular three-digit score, the increasing number of applications each program receives will directly conflict with holistic application screening because of the burden placed on educators to review many more applications. This may lead other objective measures, such as USMLE Step 2, to become the screening tool of choice [14].

Given the limited amount of objective data, such as USMLE test scores, on residency applications, it is essential for students to understand residency program directors’ perspectives on the application process in light of this consequential change. Unsurprisingly, program directors across all specialties emphasized application components including interviews, passing USMLE examinations, core clerkship grades, demonstrating leadership, and letters of recommendation.

Our results indicate that PDs in surgical specialties emphasize class rank, letters of recommendation, research, and previous involvement in collegiate sports. This is consistent with other studies analyzing factors that influence the residency ranking process [6, 15, 16, 17]. Thus, prospective candidates may glean insight into applicant trends at specific programs by understanding the demographics of that program’s director. The differences in applicant preference based on PD demographics, including age, sex, ethnicity, and athletic history, suggest that implicit biases may influence the decision-making process for residency applicants. However, we feel these data should not be used to guide students towards or away from certain programs based solely on PD demographics; rather, students should be aware that both implicit and explicit bias may affect applicant ranking during the residency application process. Additional studies are warranted to elucidate this potential relationship further.

There were several limitations to this study. First, we obtained program directors’ emails via FREIDA and found that many were not current, as we received undeliverable-mail notices and responses indicating that some PDs had left their positions. Another limitation is that our survey focused on USMLE scores rather than the Comprehensive Osteopathic Medical Licensing Examination (COMLEX) scores used by osteopathic medical schools. Future studies may include more emphasis on the role that COMLEX scores will play in residency applicant ranking.

There are several additional survey questions that we did not include but that would supplement the findings of this study. For example, collecting demographic information about the size and rank/competitive nature of programs would help applicants better understand how program size and reputation affect the perceived importance of applicant factors. Our survey asked about the importance of factors on a Likert scale of 1–5. However, future iterations of surveys could assess whether the presence or absence of a factor, such as failure of a Step 1 exam, is critical in ranking applicants (i.e., a program may not rank an applicant with any USMLE failures). Another limitation is the omission of a free-response text box for PDs to include additional comments on important aspects of candidate selection that were not included in our survey. We suggest a free-response section be included in future PD surveys.

Our response rate of 19% is comparable to the 18% response rate for the 2020 NRMP Program Director Survey, but lower than the 33.1% response rate for the 2022 NRMP Program Director Survey. Although the generalizability of the results may be limited by the low response rate and the sampling bias of those who chose to participate, we hope that our analysis of the importance of residency application factors assists medical students in understanding the application and match process across various specialties. Furthermore, we hope our data provide insight into possible implicit biases across specialties and how they may be implicated in the match process. Future research should determine whether excelling or participating in these identified factors leads to success during the application and match process.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

References

1. Examination Results and Scoring. United States Medical Licensing Examination. https://www.usmle.org/scores-transcripts/examination-results-and-scoring. Accessed 27 Feb 2024.

2. Results of the 2020 NRMP Program Director Survey. National Resident Matching Program. https://www.nrmp.org/wp-content/uploads/2022/01/2020-PD-Survey.pdf (2020). Accessed 27 Feb 2024.

3. Curriculum Reports. American Association of Medical Colleges. https://www.aamc.org/data-reports/curriculum-reports/report/curriculum-reports. Accessed 27 Feb 2024.

4. Results of the 2022 NRMP Program Director Survey. National Resident Matching Program. https://www.nrmp.org/wp-content/uploads/2022/09/PD-Survey-Report-2022_FINALrev.pdf (2022). Accessed 27 Feb 2024.

5. Wolfson RK, Fairchild PC, Bahner I, Baxa DM, Birnbaum DR, Chaudhry SI, et al. Residency program directors’ views on research conducted during medical school: a national survey. Acad Med J Assoc Am Med Coll. 2023;98(10):1185–95.

6. Patel OU, Haynes WK, Burge KG, Yadav IS, Peterson T, Camino A, et al. Results of a national survey of program directors’ perspectives on a pass/fail US Medical Licensing Examination Step 1. JAMA Netw Open. 2022;5(6):e2219212.

7. Anderson KG, Lemos J, Pickell S, Stave C, Sgroi M. Athletes in medicine: a systematic review of performance of athletes in medicine. Med Educ. 2023;57(9):807–19.

8. Shaffrey EC, Edalatpour A, Nicksic PJ, Nkana ZH, Michelotti BF, Afifi AM. Rise above the competition: how do plastic surgery residency applicants with NCAA experience fare in the residency match? Aesthetic Plast Surg. 2023.

9. Chole RA, Ogden MA. Predictors of future success in otolaryngology residency applicants. Arch Otolaryngol Head Neck Surg. 2012;138(8):707–12.

10. Shah R, Johnstone C, Rappaport D, Bilello LA, Adamas-Rappaport W. Pre-matriculation clinical experience positively correlates with Step 1 and Step 2 scores. Adv Med Educ Pract. 2018;9:707–11.

11. Dong T, Hutchinson J, Torre D, Durning SJ, Artino AR, Schreiber-Gregory D, et al. What influences the decision to interview a candidate for medical school? Mil Med. 2020;185(11–12):e1999–2003.

12. Ehrlich H, Sutherland M, McKenney M, Elkbuli A. Implications of the United States Medical Licensing Examination Step 1 examination transition to pass/fail on medical students education and future career opportunities. Am Surg. 2021;87(8):1196–202.

13. Dietrick JA, Weaver MT, Merrick HW. Pass/fail grading: a disadvantage for students applying for residency. Am J Surg. 1991;162(1):63–6.

14. Ozair A, Bhat V, Detchou DKE. The US residency selection process after the United States Medical Licensing Examination Step 1 pass/fail change: overview for applicants and educators. JMIR Med Educ. 2023;9:e37069.

15. Wang A, Karunungan KL, Story JD, Ha EL, Braddock CH. Residency program director perspectives on changes to US Medical Licensing Examination. JAMA Netw Open. 2021;4(10):e2129557.

16. Chisholm LP, Drolet BC. USMLE Step 1 scoring changes and the urology residency application process: program directors’ perspectives. Urology. 2020;145:79–82.

17. Fan RR, Aziz F, Wittgen CM, Williams MS, Smeds MR. A survey of vascular surgery program directors: perspectives following USMLE Step 1 conversion to pass/fail and virtual only interviews. Ann Vasc Surg. 2023;88:32–41.


Author information

Authors and affiliations

Penn State College of Medicine, 17033, Hershey, PA, USA

Sarah A. Strausser, Kelly M. Dopke & Destin Groff

Department of Public Health Services, Division of Biostatistics, Penn State Hershey Medical Center, Hershey, PA, USA

Sue Boehmer

Department of Emergency Medicine, Penn State Milton S. Hershey Medical Center, 17033, Hershey, PA, USA

Robert P. Olympia


Contributions

All authors contributed to the conception and design of the work. All authors edited and reviewed the manuscript. S.A.S., K.M.D., and D.G. wrote the main manuscript text, collected data, and edited tables. S.A.S. prepared Fig. 1. S.B. performed statistical analysis, generated the tables, and wrote the statistical analysis section of the manuscript. R.P.O. oversaw project administration from conception of the project to drafting of the manuscript.

Corresponding author

Correspondence to Sarah A. Strausser .

Ethics declarations

Ethics approval and consent to participate

Ethical approval has been granted by the Penn State College of Medicine Institutional Review Board, on 5/13/22, IRB#19767. Prior to starting the survey, all participants received a Summary Explanation of Research explaining that the completion of the questionnaire implies voluntary consent to participate in the research.

Consent for publication

Previous presentations

AAMC Learn Serve Lead Conference, Seattle, November 2023.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Strausser, S.A., Dopke, K.M., Groff, D. et al. Importance of residency applicant factors based on specialty and demographics: a national survey of program directors. BMC Med Educ 24, 275 (2024). https://doi.org/10.1186/s12909-024-05267-8


Received: 09 January 2024

Accepted: 06 March 2024

Published: 13 March 2024

DOI: https://doi.org/10.1186/s12909-024-05267-8


Keywords

  • Medical education
  • Application

BMC Medical Education

ISSN: 1472-6920


8 in 10 Americans say religion is losing influence in public life; few see Biden or Trump as especially religious.

Pew Research Center conducted this survey to explore Americans’ attitudes about religion’s role in public life, including politics in a presidential election year.

For this report, we surveyed 12,693 respondents from Feb. 13 to 25, 2024. Most of the respondents (10,642) are members of the American Trends Panel, an online survey panel recruited through national random sampling of residential addresses, which gives nearly all U.S. adults a chance of selection.

The remaining respondents (2,051) are members of three other panels, the Ipsos KnowledgePanel, the NORC Amerispeak panel and the SSRS opinion panel. All three are national survey panels recruited through random sampling (not “opt-in” polls). We used these additional panels to ensure that the survey would have enough Jewish and Muslim respondents to be able to report on their views.

The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education, religious affiliation and other categories.
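
As an aside for readers unfamiliar with survey weighting, the sketch below shows the general idea of iterative proportional fitting (“raking”), a common way to align a sample’s margins with population benchmarks. It is a simplified illustration with hypothetical variables and targets, not Pew Research Center’s actual weighting procedure.

```python
# Minimal raking sketch: adjust weights until sample margins match target shares.
import numpy as np
import pandas as pd

# Hypothetical unweighted sample with two weighting dimensions.
sample = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F", "M", "F", "M"],
    "educ":   ["BA+", "<BA", "<BA", "BA+", "<BA", "<BA", "BA+", "<BA"],
})
targets = {
    "gender": {"F": 0.52, "M": 0.48},     # assumed population shares
    "educ":   {"BA+": 0.35, "<BA": 0.65},
}

weights = np.ones(len(sample))
for _ in range(50):                        # iterate until the margins converge
    for var, target in targets.items():
        for level, share in target.items():
            mask = (sample[var] == level).to_numpy()
            current = weights[mask].sum() / weights.sum()
            weights[mask] *= share / current

sample["weight"] = weights / weights.mean()    # normalize to a mean weight of 1
print(sample)
```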

For more, refer to the ATP’s Methodology and the Methodology for this report. Read the questions used in this report.

Chart shows the share of Americans who say religion’s influence is declining is as high as it’s ever been

A new Pew Research Center survey finds that 80% of U.S. adults say religion’s role in American life is shrinking – a percentage that’s as high as it’s ever been in our surveys.

Most Americans who say religion’s influence is shrinking are not happy about it. Overall, 49% of U.S. adults say both that religion is losing influence and that this is a bad thing. An additional 8% of U.S. adults think religion’s influence is growing and that this is a good thing.

Together, a combined 57% of U.S. adults – a clear majority – express a positive view of religion’s influence on American life.

Chart shows 49% of Americans say religion’s influence is declining and that this is a bad thing

The survey also finds that about half of U.S. adults say it’s “very” or “somewhat” important to them to have a president who has strong religious beliefs, even if those beliefs are different from their own. But relatively few Americans view either of the leading presidential candidates as very religious: 13% of Americans say they think President Joe Biden is very religious, and just 4% say this about former President Donald Trump.

Overall, there are widespread signs of unease with religion’s trajectory in American life. This dissatisfaction is not just among religious Americans. Rather, many religious and nonreligious Americans say they feel that their religious beliefs put them at odds with mainstream culture, with the people around them and with the other side of the political spectrum. For example:

Chart shows a growing share of Americans feel their religious views are at odds with the mainstream

  • 48% of U.S. adults say there’s “a great deal” of or “some” conflict between their religious beliefs and mainstream American culture, up from 42% in 2020.
  • 29% say they think of themselves as religious minorities, up from 24% in 2020.
  • 41% say it’s best to avoid discussing religion at all if someone disagrees with you, up from 33% in 2019.
  • 72% of religiously unaffiliated adults – those who identify, religiously, as atheist, agnostic or “nothing in particular” – say conservative Christians have gone too far in trying to control religion in the government and public schools; 63% of Christians say the same about secular liberals.

These are among the key findings of a new Pew Research Center survey, conducted Feb. 13-25, 2024, among a nationally representative sample of 12,693 U.S. adults.

This report examines:

  • Religion’s role in public life
  • U.S. presidential candidates and their religious engagement
  • Christianity’s place in politics, and “Christian nationalism”

The survey also finds wide partisan gaps on questions about the proper role for religion in society, with Republicans more likely than Democrats to favor religious influence in governance and public life. For instance:

  • 42% of Republicans and Republican-leaning independents say that when the Bible and the will of the people conflict, the Bible should have more influence on U.S. laws than the will of the people. Just 16% of Democrats and Democratic-leaning independents say this.
  • 21% of Republicans and GOP leaners say the federal government should declare Christianity the official religion of the United States, compared with 7% of Democrats and Democratic leaners.

Moral and religious qualities in a president

Almost all Americans (94%) say it is “very” or “somewhat” important to have a president who personally lives a moral and ethical life. And a majority (64%) say it’s important to have a president who stands up for people with their religious beliefs.

About half of U.S. adults (48%) say it is important for the president to hold strong religious beliefs. Fewer (37%) say it’s important for the president to have the same religious beliefs as their own.

Republicans are much more likely than Democrats to value religious qualities in a president, and Christians are more likely than the religiously unaffiliated to do so. For example:

  • Republicans and GOP leaners are twice as likely as Democrats and Democratic leaners to say it is important to have a president who has the same religious beliefs they do (51% vs. 25%).
  • 70% of White evangelical Protestants say it is important to have a president who shares their religious beliefs. Just 11% of religiously unaffiliated Americans say this.

Chart shows nearly all U.S. adults say it is important to have a president who personally lives a moral, ethical life

Views of Biden, Trump and their religious engagement

Relatively few Americans think of Biden or Trump as “very” religious. Indeed, even most Republicans don’t think Trump is very religious, and even most Democrats don’t think Biden is very religious.

  • 6% of Republicans and GOP leaners say Trump is very religious, while 44% say he is “somewhat” religious. Nearly half (48%) say he is “not too” or “not at all” religious.
  • 23% of Democrats and Democratic-leaning independents say Biden is very religious, while 55% say he is somewhat religious. And 21% say he is not too or not at all religious.

Chart shows few Americans see Biden, Trump as very religious

Though they don’t think Trump is very religious himself, most Republicans and people in religious groups that tend to favor the Republican Party do think he stands up at least to some extent for people with their religious beliefs. Two-thirds of Republicans and independents who lean toward the GOP (67%) say Trump stands up for people with their religious beliefs “a great deal,” “quite a bit” or “some.” About the same share of White evangelical Protestants (69%) say this about Trump.

Similarly, 60% of Democrats and Democratic-leaning independents, as well as 73% of Jewish Americans and 60% of Black Protestants, say Biden stands up for people with their religious beliefs a great deal, quite a bit or some.

Chart shows about 7 in 10 White evangelical Protestants say Trump stands up for people with their religious beliefs at least to ‘some’ extent

Overall, views of both Trump and Biden are generally unfavorable.

  • White evangelical Protestants – a largely Republican group – stand out as having particularly favorable views of Trump (67%) and unfavorable views of Biden (86%).
  • Black Protestants and Jewish Americans – largely Democratic groups – stand out for having favorable views of Biden and unfavorable views of Trump.

Chart shows views of Biden and Trump are divided along religious and partisan lines

Views on trying to control religious values in the government and schools

Americans are almost equally split on whether conservative Christians have gone too far in trying to push their religious values in the government and public schools, as well as on whether secular liberals have gone too far in trying to keep religious values out of these institutions.

Most religiously unaffiliated Americans (72%) and Democrats (72%) say conservative Christians have gone too far. And most Christians (63%) and Republicans (76%) say secular liberals have gone too far.

Chart shows many Americans think conservative Christians, secular liberals have gone too far in trying to control religion in government and public schools

Christianity’s place in politics, and Christian nationalism

In recent years, “Christian nationalism” has received a great deal of attention as an ideology that some critics have said could threaten American democracy.

Table shows Americans’ views of Christian nationalism have been stable since 2022

Despite growing news coverage of Christian nationalism – including reports of political leaders who seem to endorse the concept – the new survey shows that there has been no change in the share of Americans who have heard of Christian nationalism over the past year and a half. Similarly, the new survey finds no change in how favorably U.S. adults view Christian nationalism.

Overall, 45% say they have heard or read about Christian nationalism, including 25% who also have an unfavorable view of it and 5% who have a favorable view of it. Meanwhile, 54% of Americans say they haven’t heard of Christian nationalism at all.

One element often associated with Christian nationalism is the idea that church and state should not be separated, despite the Establishment Clause in the First Amendment to the U.S. Constitution.

The survey finds that about half of Americans (49%) say the Bible should have “a great deal” of or “some” influence on U.S. laws, while another half (51%) say it should have “not much” or “no influence.” And 28% of U.S. adults say the Bible should have more influence than the will of the people if the two conflict. These numbers have remained virtually unchanged over the past four years.

Chart shows 28% of Americans say the Bible should prevail if Bible and the people’s will conflict

In the new survey, 16% of U.S. adults say the government should stop enforcing the separation of church and state. This is little changed since 2021.

Chart shows Views on church-state separation and the U.S. as a Christian nation

In response to a separate question, 13% of U.S. adults say the federal government should declare Christianity the official religion of the U.S., and 44% say the government should not declare the country a Christian nation but should promote Christian moral values. Meanwhile, 39% say the government should not elevate Christianity in either way. [1]

Overall, 3% of U.S. adults say the Bible should have more influence on U.S. laws than the will of the people; and that the government should stop enforcing separation of church and state; and that Christianity should be declared the country’s official religion. And 13% of U.S. adults endorse two of these three statements. Roughly one-fifth of the public (22%) expresses one of these three views that are often associated with Christian nationalism. The majority (62%) expresses none.
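
As a rough illustration of how a count-of-views index like this can be tallied from respondent-level data, here is a small sketch in Python. The column names and values are hypothetical; this is not Pew’s analysis code.

```python
# Count how many of the three statements each respondent endorses.
import pandas as pd

df = pd.DataFrame({                        # hypothetical indicator data (1 = endorses)
    "bible_over_peoples_will": [1, 0, 0, 1, 0],
    "end_church_state_separation": [1, 0, 0, 0, 0],
    "declare_official_religion": [1, 0, 0, 0, 0],
})

df["n_endorsed"] = df.sum(axis=1)          # 0, 1, 2, or 3 statements endorsed
shares = df["n_endorsed"].value_counts(normalize=True).sort_index()
print(shares)                              # share of respondents at each count
```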

Guide to this report

The remainder of this report describes these findings in additional detail. Chapter 1 focuses on the public’s perceptions of religion’s role in public life. Chapter 2 examines views of presidential candidates and their religious engagement. And Chapter 3 focuses on Christian nationalism and views of the U.S. as a Christian nation.

[1] The share saying that the government should declare Christianity the official national religion (13%) is almost identical to the share who said the government should declare the U.S. a Christian nation in a March 2021 survey that asked a similar question (15%).



Computer Science > Computer Vision and Pattern Recognition

Title: MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training

Abstract: In this work, we discuss building performant Multimodal Large Language Models (MLLMs). In particular, we study the importance of various architecture components and data choices. Through careful and comprehensive ablations of the image encoder, the vision language connector, and various pre-training data choices, we identified several crucial design lessons. For example, we demonstrate that for large-scale multimodal pre-training using a careful mix of image-caption, interleaved image-text, and text-only data is crucial for achieving state-of-the-art (SOTA) few-shot results across multiple benchmarks, compared to other published pre-training results. Further, we show that the image encoder together with image resolution and the image token count has substantial impact, while the vision-language connector design is of comparatively negligible importance. By scaling up the presented recipe, we build MM1, a family of multimodal models up to 30B parameters, including both dense models and mixture-of-experts (MoE) variants, that are SOTA in pre-training metrics and achieve competitive performance after supervised fine-tuning on a range of established multimodal benchmarks. Thanks to large-scale pre-training, MM1 enjoys appealing properties such as enhanced in-context learning, and multi-image reasoning, enabling few-shot chain-of-thought prompting.
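
To give a sense of what a “vision-language connector” is in this context, the toy sketch below shows its simplest common form: a learned projection that maps image-encoder patch features into the language model’s embedding space so they can be interleaved with text tokens. This is a generic illustration with made-up dimensions, not the MM1 implementation.

```python
# Toy vision-language connector: project image patch features into LLM token space.
import torch
import torch.nn as nn

class LinearConnector(nn.Module):
    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_image_tokens, vision_dim)
        return self.proj(patch_features)   # (batch, num_image_tokens, llm_dim)

connector = LinearConnector()
image_tokens = connector(torch.randn(2, 144, 1024))   # e.g., 144 image tokens per image
print(image_tokens.shape)                              # torch.Size([2, 144, 4096])
```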


What Nvidia’s new Blackwell chip says about AI’s carbon footprint problem

Nvidia CEO Jensen Huang holding the company's latest GB200 system.

Hello and welcome to Eye on AI. The biggest show in AI this week is Nvidia’s GTC developer conference in San Jose, Calif. One Wall Street analyst quipped the chipmaker’s confab was the equivalent of “AI Woodstock,” given all the heavy-hitters present from not just Nvidia, but companies such as OpenAI, xAI, Meta, Google, and Microsoft, and the presence of executives from major companies looking to implement AI, including L’Oréal, Lowe’s, Shell, and Verizon.

At GTC yesterday, Nvidia CEO Jensen Huang unveiled the company’s newest graphics processing unit (GPU), the kind of chips that have become the workhorses of AI. The forthcoming Blackwell GPU will have 208 billion transistors, far exceeding the 80 billion its current top-of-the-line H100 GPUs have. The larger chips mean they will be twice as fast at training AI models and five times faster at inference—the term for generating an output from an already trained AI model. Nvidia is also offering a powerful new GB200 “superchip” that would include two Blackwell GPUs coupled together with its Grace CPU and supersede the current Grace Hopper MGX units that Nvidia sells for use in data centers.

What’s interesting about the Blackwell is its power profile—and how Nvidia is using it to market the chip. Until recently, the trend has been that more powerful chips also consumed more energy, and Nvidia didn’t spend much effort trying to make energy efficiency a selling point, focusing instead on raw performance. But in unveiling the Blackwell, Huang emphasized how the new GPU’s greater processing speed meant that the power consumption during training was far less than with the H100 and earlier A100 chips. He said training the latest ultra-large AI models using 2,000 Blackwell GPUs would use 4 megawatts of power over 90 days of training, compared to having to use 8,000 older GPUs for the same period of time, which would consume 15 megawatts of power. That’s the difference between the hourly power consumption of 30,000 homes and just 8,000 homes.
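
A quick back-of-the-envelope check of those figures, using only the numbers quoted above, is sketched below.

```python
# Rough arithmetic check of the quoted training-power figures (values from the text).
scenarios = [
    ("Blackwell", 4, 2_000),    # (label, megawatts, GPU count)
    ("prior-gen", 15, 8_000),
]
days = 90

for name, megawatts, gpus in scenarios:
    energy_mwh = megawatts * 24 * days        # total energy over the 90-day run
    kw_per_gpu = megawatts * 1_000 / gpus     # average draw per GPU, incl. overhead
    print(f"{name}: {energy_mwh:,.0f} MWh total, ~{kw_per_gpu:.2f} kW per GPU")
```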

Nvidia is talking about the Blackwell’s power profile because people are growing increasingly alarmed about both the monetary cost of AI and its carbon footprint. Those two factors are related since one reason cloud providers charge so much to run GPUs is not just the cost of the chips themselves, but the cost of the energy to run them (and to cool the data centers where they are housed since the chips also throw off more heat than conventional CPUs). And both those factors have made many companies reluctant to fully embrace the generative AI revolution because they are worried about the expense and about doing damage to net zero sustainability pledges. Nvidia knows this—hence its sudden emphasis on power consumption. The company has also pointed out that many AI experts working on open-source models have found ways to mimic some aspects of the performance of much larger, energy-intensive models such as GPT-4 but with models that are much smaller and less power-consuming.

Currently, data centers consume just over 1% of the world’s power, with estimates that AI is a fraction of that. Schneider Electric recently estimated that AI consumes about as much power annually as the nation of Cyprus. That number may climb rapidly due to AI. One expert at Microsoft has suggested that just the Nvidia H100s in deployment will consume about as much power as all of Phoenix by the end of this year.

Still, I have always thought the focus on AI’s energy consumption in data centers was a bit of a red herring, since most of the data centers of the cloud hyperscalers, which is where most AI is being run right now, are now powered by renewable energy or low-carbon nuclear power. And the fact that these companies are willing to contract for large amounts of renewable power at set prices has played a key role in giving renewable power companies the confidence to build large wind and solar power projects. The presence of these hyperscalers in the renewables market has meant there is more renewable power available for everyone. It’s a win-win. (Far more troubling is the water consumption needed to keep these data centers cool. Here consuming less power, and generating less heat, would have a more direct impact on sustainability.)

That said, AI is a global phenomenon, and there are some places where there isn’t much renewable power available. And if AI is adopted to the extent many project, and if AI models keep getting larger, it is possible renewable energy demand could outstrip low carbon supplies even in the U.S. and Europe. That’s one reason Microsoft has expressed interest in trying to use AI to speed up the process of getting new nuclear plants approved for construction in the U.S.

It is also true that AI’s energy consumption is among the many areas where our own brains are vastly superior to the artificial ones we’ve created. The human brain consumes about 0.3 kilowatt hours daily by burning calories, compared to about 10 kilowatt hours daily for the average H100. To really make AI ubiquitous without destroying the planet in the process, we may need to find a way to get artificial neural networks to operate with an energy profile that looks a bit more like the natural ones.

That’s essentially what the U.K.’s Advanced Research and Invention Agency (Aria, which is the country’s answer to the U.S. Defense Department’s DARPA) is hoping to bring about. Last week, Aria announced it was committing £42 million ($53 million) to fund projects working towards reducing the current energy footprint of running AI applications by a factor of a thousand. It said it would consider radically different ways of building computer chips in order to do so, including chips that rely on biological neurons for computation instead of silicon transistors. (I wrote about one such effort in 2020.)

The effort is pretty sci-fi and may not yield the results Aria hopes. But the very fact the Aria challenge exists and that Nvidia is now putting energy efficiency on center stage at GTC are signs the world is getting serious about tackling AI’s carbon footprint. Hopefully, this means AI won’t destroy our efforts to build a more sustainable world.

There’s more AI news below. But first, if you’re enjoying reading this newsletter, how would you like to participate in a live version—chatting in-person and IRL with me and many of the world’s foremost experts on deploying AI within companies? If that sounds intriguing, please apply to attend Fortune’s Brainstorm AI conference in London on April 15 and 16. I’ll be there cochairing the event and moderating sessions. You will get to hear from Google DeepMind’s Zoubin Ghahramani, Microsoft chief scientist Jaime Teevan, Salesforce chief ethical and human use officer Paula Goldman, as well as Shez Partovi, the chief innovation and strategy officer for Royal Philips, Accenture’s chief AI officer Lan Guan, Builder.ai CEO Sachin Dev Duggal, and many others. Email [email protected] to apply to attend. I hope to see you there!

With that, here’s the AI news.

Jeremy Kahn [email protected] @jeremyakahn

AI IN THE NEWS

Microsoft hires DeepMind cofounder Suleyman and much of his Inflection team. Bloomberg reports that Microsoft is hiring former DeepMind cofounder Mustafa Suleyman, currently the founder and CEO of Inflection AI, to head a new consumer AI division, reporting directly to Microsoft CEO Satya Nadella. Many of those currently working with Suleyman are also moving over to Microsoft.

Inflection will continue as an independent company but shelve its Pi chatbot—which was designed to be an empathetic companion to users—and focus instead on selling AI solutions to business customers. Microsoft and Nvidia had previously invested in Inflection, which had raised $1.3 billion in a venture capital round in June that valued the company at a reported $4 billion. Inflection recently reported that Pi had one million active daily users. But Suleyman told Bloomberg that the company had not found a good business model for the chatbot.

Suleyman said that in his new role at Microsoft, he will be in charge of building compelling consumer products on top of underlying AI models, including both those Microsoft builds in-house and ones it receives through its partnership with OpenAI.

Apple reportedly in talks with Google to license Gemini models. That’s according to a Bloomberg story that cited unnamed sources familiar with the negotiations. The article said the iPhone maker is talking to Google about using Gemini to power cloud-based generative AI features on its phones, which might write documents or generate images, while it continues to work on building its own large language models that are expected to power a future generation of on-device AI applications.

DHS unveils AI roadmap. The Department of Homeland Security announced its AI roadmap this week. The roadmap will see the department pushing forward in three areas: using AI to promote DHS’s own mission, promoting nationwide AI safety and security, especially around critical national infrastructure, and cementing partnerships with both state and local governments and international partners. President Joe Biden’s October 2023 Executive Order on AI instructed DHS to prepare a plan to ensure the safety of AI systems used in critical infrastructure, like power grids, to reduce the risks that AI could be used to create bioweapons or other weapons of mass destruction. The executive summary contains some interesting ways DHS is already using AI to spot suspicious patterns of vehicle crossings at border points and locate child sexual abuse victims.

Chinese and Western academics issue stark warning on AI risks and share ‘red lines.’ Chinese and Western experts in AI and international security met in Beijing last week and agreed to several "red lines" that all nations should ensure AI does not cross, including AI with the ability to autonomously create bioweapons or perpetrate devastating cyberattacks, the Financial Times reported. The joint statement from the meeting of the International Dialogue on AI Safety was signed by several AI luminaries, including Turing Award winners and deep learning pioneers Geoffrey Hinton and Yoshua Bengio, Stuart Russell, a well-known computer scientist at UC Berkeley, and Andrew Yao, among China’s most prominent computer scientists. Comparing the international agreement to similar agreements around weapons systems made during the Cold War, the group’s statement said that “humanity again needs to co-ordinate to avert a catastrophe that could arise from unprecedented technology.”

SEC fines two investment firms for “AI washing.” That’s the term for a company making misleading claims about its use of AI and the performance of AI systems. The SEC fined Toronto-based Delphia and San Francisco-based Global Predictions a combined $400,000 to settle civil charges, Reuters reported. The firms neither admitted nor denied wrongdoing as part of the agreement.

New pro-AI acceleration lobbying group sets up shop. Alliance for the Future, a new D.C. trade association affiliated with the effective accelerationist (or e/acc) movement, has launched. The group wants to serve as a counterbalance to what it sees as undue influence in Washington policymaking circles from “AI doomer” groups that are concerned about AI’s existential risks. You can read more here.

EYE ON AI NUMBERS

$1 billion. That is how much training runs for the largest large language models may soon cost, according to James Hamilton, a distinguished engineer at Amazon’s AWS. The figure appeared in slides for a talk Hamilton gave at a conference and recently posted to his blog. (H/t to Anthropic’s Jack Clark and his Import AI newsletter for highlighting the news.) In the slides, Hamilton noted that AWS had in the past year spent about $65 million training a 200 billion parameter model on 4 trillion tokens of data using 13,760 older generation Nvidia A100 chips for 48 days. But as Hamilton’s slides indicate, this is a “1 gen old” technology. Bigger models, trained on newer chips, probably cost 10 times as much. And the $1 billion figure comes from extrapolating those numbers out to the next generation of models.
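
The arithmetic behind that headline figure is simple enough to sketch; the inputs below are the numbers quoted above, and the final step up to roughly $1 billion is Hamilton’s own extrapolation to the following model generation.

```python
# Rough sketch of the cost extrapolation described above.
a100_run_cost = 65e6           # ~$65M: 200B-parameter model, 4T tokens, 13,760 A100s, 48 days
newer_gen_multiplier = 10      # "Bigger models, trained on newer chips, probably cost 10 times as much"

estimated_next_run = a100_run_cost * newer_gen_multiplier
print(f"~${estimated_next_run / 1e6:,.0f}M for a comparable run on newer chips")
# Extrapolating one more model generation out lands in the ~$1B range cited above.
```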

FORTUNE ON AI

Sam Altman is over GPT-4: ‘I think it kind of sucks’ —by Chris Morris

How AI can make U.S. cities smarter, safer, and greener —by Nick Rockel

An AI platform set up by a college student has pulled down deepfake versions of Drake and Amy Winehouse after facing a landmark legal challenge from the U.K. music industry —by Ryan Hogg

How AI could help make the IVF process easier —by Alyssa Newcomb

AI CALENDAR

March 18-21: Nvidia GTC AI conference in San Jose, Calif.

April 15-16: Fortune Brainstorm AI London (Register here.)

May 7-11:  International Conference on Learning Representations (ICLR) in Vienna

June 25-27: 2024 IEEE Conference on Artificial Intelligence in Singapore

AI Doomers are a little weird. But so what? The New Yorker ran a big feature story by Andrew Marantz headlined “Among the A.I. Doomsayers” on the subculture of people in the Bay Area who are dedicated to AI safety research and trying to prevent what they fear may be extinction-level risks from advanced AI. Many of these people are affiliated with the philosophical movement Effective Altruism or the somewhat related Rationalist movement—both of which believe in applying cost-benefit analysis to figure out how to lead a moral life. They also live in group houses and enjoy cuddle puddles. The story is a fun read. But at the end of the day, I’m not quite sure what Marantz’s point is. I guess he wants us to question whether we should trust the AI Safety crowd’s pronouncements about AI gloom and doom because they are all a little weird. But the article doesn’t really try to engage with the core issue of whether AI actually might pose an existential risk to humanity, how big that risk might be, and what we should do about it. Instead, it basically pokes fun at the doomer lifestyle. But that doesn’t mean their ideas about AI necessarily are kooky. And given the consequences of getting this wrong, it would have been nice if The New Yorker had actually tried to examine the substance of the “AI doomer” vs. “e/acc” debate, rather than skimming the surface. 

This is the online version of Eye on AI, Fortune’s weekly newsletter on how AI is shaping the future of business. Sign up for free.



  14. What Is a Research Design

    A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about: Your overall research objectives and approach. Whether you'll rely on primary research or secondary research. Your sampling methods or criteria for selecting subjects. Your data collection methods.

  15. 3. Survey design and the research process

    Topics are the broad areas of research that a survey designer will investigate and work on to refine in preparation for researching via a survey. Topics are the entry point to survey design and contain theories and perspectives. Studies associated with them entail various methodological approaches reflecting the research on them to a point in time.

  16. What Is A Survey And Why Does Survey Research Matter?

    Survey is a research method that involves collecting information about people's preferences, behaviors and thoughts. Researchers can conduct surveys in multiple ways depending on the chosen study objective. There are numerous survey research methods, such as Interviews, Focus Groups, Mailed Questionnaires, Online questionnaires, and much more.

  17. Survey Research Design

    The survey research design is a very valuable tool for assessing opinions and trends. Even on a small scale, such as local government or small businesses, judging opinion with carefully designed surveys can dramatically change strategies. Television chat-shows and newspapers are usually full of facts and figures gleaned from surveys but often ...

  18. PDF Survey Design

    A survey is a systematic method for gathering information from (a sample of) entities for the purposes of constructing quantitative descriptors of the attributes of the larger population of which the entities are members. Surveys are conducted to gather information that reflects population's attitudes, behaviors, opinions and beliefs that ...

  19. Survey Research: An Effective Design for Conducting Nursing Research

    An important advantage of survey research is its flexibility. Surveys can be used to conduct large national studies or to query small groups. Surveys can be made up of a few unstructured questions or can involve a large-scale, multisite longitudinal study with multiple highly validated questionnaires. Regardless of the study's degree of sophistication and rigor, nurses must understand how to ...

  20. (PDF) The Importance of Survey Research Standards

    The Importance of Survey Research Standards. February 2013; American Journal of Pharmaceutical Education 77(1):4 ... A survey design was considered appropriate in this study because it gives room ...

  21. Survey Descriptive Research: Design & Examples

    Here are some of the most important advantages of conducting a survey descriptive research: 1. Quantitativeness and qualitatively. The descriptive survey research design uses both quantitative and qualitative research methods. It is used primarily to conduct quantitative research and gather data that is statistically easy to analyze.

  22. Research Design

    Research design is important because it guides the entire research process and ensures that the study is conducted in a systematic and rigorous manner. Types of Research Design. ... Researchers use various designs, such as survey research, experimental research, and case studies, to study different aspects of the business world.

  23. Survey research

    Abstract. Survey research is a unique methodology that can provide insight into individuals' perspectives and experiences and can be collected on a large population-based sample. Specifically, in plastic surgery, survey research can provide patients and providers with accurate and reproducible information to assist with medical decision-making.

  24. National cross-disciplinary research ethics and integrity study

    Diverse research practices and methodologies across various disciplines have contributed to the development of distinct viewpoints regarding what constitutes a meaningful contribution to a research article. The survey results allowed us to classify the authorship criteria into two categories: those considered essential for ensuring authorship ...

  25. Importance of residency applicant factors based on specialty and

    Background With the transition away from traditional numerical grades/scores, residency applicant factors such as service, research, leadership, and extra-curricular activities may become more critical in the application process. Objective To assess the importance of residency application factors reported by program directors (PDs), stratified by director demographics and specialty. Method A ...

  26. 8 in 10 Americans Say Religion Is Losing Influence in Public Life

    Pew Research Center conducted this survey to explore Americans' attitudes about religion's role in public life, including politics in a presidential election year. For this report, we surveyed 12,693 respondents from Feb. 13 to 25, 2024. Most of the respondents (10,642) are members of the ...

  27. MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training

    In this work, we discuss building performant Multimodal Large Language Models (MLLMs). In particular, we study the importance of various architecture components and data choices. Through careful and comprehensive ablations of the image encoder, the vision language connector, and various pre-training data choices, we identified several crucial design lessons. For example, we demonstrate that ...

  28. What Nvidia's new Blackwell GPU says about the importance of AI's

    Still, I have always thought the focus on AI's energy consumption in data centers was a bit of a red herring, since most of the data centers of the cloud hyperscalers, which is where most AI is ...