
What is Research? – Purpose of Research

  • By DiscoverPhDs
  • September 10, 2020


The purpose of research is to enhance society by advancing knowledge through the development of scientific theories, concepts and ideas. A research purpose is met through forming hypotheses, collecting data, analysing results, forming conclusions, implementing findings into real-life applications and forming new research questions.

What is Research

Simply put, research is the process of discovering new knowledge. This knowledge can be either the development of new concepts or the advancement of existing knowledge and theories, leading to a new understanding that was not previously known.

As a more formal definition of research, the following has been extracted from the Code of Federal Regulations (45 CFR 46.102):

“Research means a systematic investigation, including research development, testing, and evaluation, designed to develop or contribute to generalizable knowledge.”

While research can be carried out by anyone and in any field, most research is usually done to broaden knowledge in the physical, biological, and social worlds. This can range from learning why certain materials behave the way they do, to asking why certain people are more resilient than others when faced with the same challenges.

The use of ‘systematic investigation’ in the formal definition represents how research is normally conducted – a hypothesis is formed, appropriate research methods are designed, data is collected and analysed, and research results are summarised into one or more ‘research conclusions’. These research conclusions are then shared with the rest of the scientific community to add to the existing knowledge and serve as evidence to form additional questions that can be investigated. It is this cyclical process that enables scientific research to make continuous progress over the years; the true purpose of research.

What is the Purpose of Research

From weather forecasts to the discovery of antibiotics, researchers are constantly trying to find new ways to understand the world and how things work – with the ultimate goal of improving our lives.

The purpose of research is therefore to find out what is known, what is not and what we can develop further. In this way, scientists can develop new theories, ideas and products that shape our society and our everyday lives.

Although research can take many forms, there are three main purposes of research:

  • Exploratory: Exploratory research is the first research to be conducted around a problem that has not yet been clearly defined. Exploratory research therefore aims to gain a better understanding of the exact nature of the problem and not to provide a conclusive answer to the problem itself. This enables us to conduct more in-depth research later on.
  • Descriptive: Descriptive research expands knowledge of a research problem or phenomenon by describing it according to its characteristics and population. Descriptive research focuses on the ‘how’ and ‘what’, but not on the ‘why’.
  • Explanatory: Explanatory research, also referred to as causal research, is conducted to determine how variables interact, i.e. to identify cause-and-effect relationships. Explanatory research deals with the ‘why’ of research questions and is therefore often based on experiments.

Characteristics of Research

There are 8 core characteristics that all research projects should have. These are:

  • Empirical  – based on proven scientific methods derived from real-life observations and experiments.
  • Logical  – follows sequential procedures based on valid principles.
  • Cyclic  – research begins with a question and ends with a question, i.e. research should lead to a new line of questioning.
  • Controlled  – rigorous measures put into place to keep all variables constant, except those under investigation.
  • Hypothesis-based  – the research design generates data that sufficiently meets the research objectives and can support or refute the hypothesis. This makes the research study repeatable and gives credibility to the results.
  • Analytical  – data is generated, recorded and analysed using proven techniques to ensure high accuracy and repeatability while minimising potential errors and anomalies.
  • Objective  – sound judgement is used by the researcher to ensure that the research findings are valid.
  • Statistical treatment  – statistical treatment is used to transform the available data into something more meaningful from which knowledge can be gained.
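The ‘analytical’ and ‘statistical treatment’ characteristics above can be sketched in a few lines of code. The following is a minimal illustration only; the measurements are invented for this example:

```python
import statistics

# Hypothetical raw measurements from repeated trials of an experiment
measurements = [9.8, 10.1, 9.9, 10.3, 10.0, 9.7, 10.2]

# Statistical treatment: condense the raw data into more meaningful figures
mean = statistics.mean(measurements)
stdev = statistics.stdev(measurements)      # sample standard deviation
sem = stdev / len(measurements) ** 0.5      # standard error of the mean

print(f"mean = {mean:.2f}, stdev = {stdev:.2f}, SEM = {sem:.2f}")
```

Reporting a mean together with its standard error, rather than the raw list of numbers, is a simple example of transforming available data into something more meaningful from which knowledge can be gained.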


Types of Research

Research can be divided into two main types: basic research (also known as pure research) and applied research.

Basic Research

Basic research, also known as pure research, is an original investigation into the reasons behind a process, phenomenon or particular event. It focuses on generating knowledge around existing basic principles.

Basic research is generally considered ‘non-commercial research’ because it does not focus on solving practical problems and offers no immediate benefit or application.

While basic research may not have direct applications, it usually provides new insights that can later be used in applied research.

Applied Research

Applied research investigates well-known theories and principles in order to enhance knowledge around a practical aim. Because of this, applied research focuses on solving real-life problems by deriving knowledge which has an immediate application.

Methods of Research

Research methods for data collection fall into one of two categories: inductive methods or deductive methods.

Inductive research methods move from specific observations towards broader generalisations and theories, and are usually associated with qualitative research. Deductive research methods start from an existing theory or hypothesis and test it against observations, and are typically associated with quantitative research.


Qualitative Research

Qualitative research is a method that enables non-numerical data collection through open-ended methods such as interviews, case studies and focus groups.

It enables researchers to collect data on personal experiences, feelings or behaviours, as well as the reasons behind them. Because of this, qualitative research is often used in fields such as social science, psychology and philosophy, and in other areas where it is useful to know the connection between what has occurred and why it has occurred.

Quantitative Research

Quantitative research is a method that collects and analyses numerical data through statistical analysis.

It allows us to quantify variables, uncover relationships, and make generalisations across a larger population. As a result, quantitative research is often used in the natural and physical sciences, such as biology, chemistry and physics, as well as in engineering, computer science, finance and medical research.

What does Research Involve?

Research often follows a systematic approach known as the scientific method, which is commonly described using an hourglass model.

A research project first starts with a problem statement, or rather, the research purpose for engaging in the study. This can take the form of the ‘scope of the study’ or ‘aims and objectives’ of your research topic.

Subsequently, a literature review is carried out and a hypothesis is formed. The researcher then creates a research methodology and collects the data.

The data is then analysed using appropriate statistical methods and the null hypothesis is either rejected or not rejected; strictly speaking, a null hypothesis is never ‘accepted’, only retained for lack of evidence against it.
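The hypothesis-testing step above can be sketched with a simple permutation test on the difference between two group means. The data, group names and 0.05 significance level here are invented for illustration, and a permutation test is only one of many possible analyses:

```python
import random
import statistics

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Two-sided permutation test on the difference in group means.

    The null hypothesis is that the group labels are interchangeable.
    Returns the observed mean difference and an estimated p-value: the
    fraction of label-shuffled datasets whose mean difference is at
    least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = statistics.mean(group_a) - statistics.mean(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_permutations

# Hypothetical resilience scores for two groups of participants
control = [52, 48, 55, 50, 47, 53]
treated = [61, 58, 64, 57, 60, 62]

diff, p_value = permutation_test(treated, control)
if p_value < 0.05:
    print(f"Reject the null hypothesis (p = {p_value:.4f})")
else:
    print(f"Fail to reject the null hypothesis (p = {p_value:.4f})")
```

Note that failing to reject the null hypothesis is not the same as proving it true; the test only measures how surprising the observed difference would be if the group labels were interchangeable.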

In both cases, the study and its conclusion are officially written up as a report or research paper, and the researcher may also recommend lines of further questioning. The report or research paper is then shared with the wider research community, and the cycle begins all over again.

Although these steps outline the overall research process, keep in mind that research projects are highly dynamic and are therefore considered an iterative process with continued refinements and not a series of fixed stages.



NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

National Academy of Sciences (US), National Academy of Engineering (US) and Institute of Medicine (US) Panel on Scientific Responsibility and the Conduct of Research. Responsible Science: Ensuring the Integrity of the Research Process: Volume I. Washington (DC): National Academies Press (US); 1992.


Responsible Science: Ensuring the Integrity of the Research Process: Volume I.


2 Scientific Principles and Research Practices

Until the past decade, scientists, research institutions, and government agencies relied solely on a system of self-regulation based on shared ethical principles and generally accepted research practices to ensure integrity in the research process. Among the very basic principles that guide scientists, as well as many other scholars, are those expressed as respect for the integrity of knowledge, collegiality, honesty, objectivity, and openness. These principles are at work in the fundamental elements of the scientific method, such as formulating a hypothesis, designing an experiment to test the hypothesis, and collecting and interpreting data. In addition, more particular principles characteristic of specific scientific disciplines influence the methods of observation; the acquisition, storage, management, and sharing of data; the communication of scientific knowledge and information; and the training of younger scientists. 1 How these principles are applied varies considerably among the several scientific disciplines, different research organizations, and individual investigators.

The basic and particular principles that guide scientific research practices exist primarily in an unwritten code of ethics. Although some have proposed that these principles should be written down and formalized, 2 the principles and traditions of science are, for the most part, conveyed to successive generations of scientists through example, discussion, and informal education. As was pointed out in an early Academy report on responsible conduct of research in the health sciences, “a variety of informal and formal practices and procedures currently exist in the academic research environment to assure and maintain the high quality of research conduct” (IOM, 1989a, p. 18).

Physicist Richard Feynman invoked the informal approach to communicating the basic principles of science in his 1974 commencement address at the California Institute of Technology (Feynman, 1985):

[There is an] idea that we all hope you have learned in studying science in school—we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. . . . It's a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards. For example, if you're doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it; other causes that could possibly explain your results; and things you thought of that you've eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can—if you know anything at all wrong, or possibly wrong—to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. In summary, the idea is to try to give all the information to help others to judge the value of your contribution, not just the information that leads to judgment in one particular direction or another. (pp. 311-312)

Many scholars have noted the implicit nature and informal character of the processes that often guide scientific research practices and inference. 3 Research in well-established fields of scientific knowledge, guided by commonly accepted theoretical paradigms and experimental methods, involves few disagreements about what is recognized as sound scientific evidence. Even in a revolutionary scientific field like molecular biology, students and trainees have learned the basic principles governing judgments made in such standardized procedures as cloning a new gene and determining its sequence.

In evaluating practices that guide research endeavors, it is important to consider the individual character of scientific fields. Research fields that yield highly replicable results, such as ordinary organic chemical structures, are quite different from fields such as cellular immunology, which are in a much earlier stage of development and accumulate much erroneous or uninterpretable material before the pieces fit together coherently. When a research field is too new or too fragmented to support consensual paradigms or established methods, different scientific practices can emerge.

THE NATURE OF SCIENCE

In broadest terms, scientists seek a systematic organization of knowledge about the universe and its parts. This knowledge is based on explanatory principles whose verifiable consequences can be tested by independent observers. Science encompasses a large body of evidence collected by repeated observations and experiments. Although its goal is to approach true explanations as closely as possible, its investigators claim no final or permanent explanatory truths. Science changes. It evolves. Verifiable facts always take precedence. . . .

Scientists operate within a system designed for continuous testing, where corrections and new findings are announced in refereed scientific publications. The task of systematizing and extending the understanding of the universe is advanced by eliminating disproved ideas and by formulating new tests of others until one emerges as the most probable explanation for any given observed phenomenon. This is called the scientific method.

An idea that has not yet been sufficiently tested is called a hypothesis. Different hypotheses are sometimes advanced to explain the same factual evidence. Rigor in the testing of hypotheses is the heart of science. If no verifiable tests can be formulated, the idea is called an ad hoc hypothesis—one that is not fruitful; such hypotheses fail to stimulate research and are unlikely to advance scientific knowledge.

A fruitful hypothesis may develop into a theory after substantial observational or experimental support has accumulated. When a hypothesis has survived repeated opportunities for disproof and when competing hypotheses have been eliminated as a result of failure to produce the predicted consequences, that hypothesis may become the accepted theory explaining the original facts.

Scientific theories are also predictive. They allow us to anticipate yet unknown phenomena and thus to focus research on more narrowly defined areas. If the results of testing agree with predictions from a theory, the theory is provisionally corroborated. If not, it is proved false and must be either abandoned or modified to account for the inconsistency.

Scientific theories, therefore, are accepted only provisionally. It is always possible that a theory that has withstood previous testing may eventually be disproved. But as theories survive more tests, they are regarded with higher levels of confidence. . . .

In science, then, facts are determined by observation or measurement of natural or experimental phenomena. A hypothesis is a proposed explanation of those facts. A theory is a hypothesis that has gained wide acceptance because it has survived rigorous investigation of its predictions. . . .

. . . science accommodates, indeed welcomes, new discoveries: its theories change and its activities broaden as new facts come to light or new potentials are recognized. Examples of events changing scientific thought are legion. . . . Truly scientific understanding cannot be attained or even pursued effectively when explanations not derived from or tested by the scientific method are accepted.

SOURCE: National Academy of Sciences and National Research Council (1984), pp. 8-11.

A well-established discipline can also experience profound changes during periods of new conceptual insights. In these moments, when scientists must cope with shifting concepts, the matter of what counts as scientific evidence can be subject to dispute. Historian Jan Sapp has described the complex interplay between theory and observation that characterizes the operation of scientific judgment in the selection of research data during revolutionary periods of paradigmatic shift (Sapp, 1990, p. 113):

What “liberties” scientists are allowed in selecting positive data and omitting conflicting or “messy” data from their reports is not defined by any timeless method. It is a matter of negotiation. It is learned, acquired socially; scientists make judgments about what fellow scientists might expect in order to be convincing. What counts as good evidence may be more or less well-defined after a new discipline or specialty is formed; however, at revolutionary stages in science, when new theories and techniques are being put forward, when standards have yet to be negotiated, scientists are less certain as to what others may require of them to be deemed competent and convincing.

Explicit statements of the values and traditions that guide research practice have evolved through the disciplines and have been given in textbooks on scientific methodologies. 4 In the past few decades, many scientific and engineering societies representing individual disciplines have also adopted codes of ethics (see Volume II of this report for examples), 5 and more recently, a few research institutions have developed guidelines for the conduct of research (see Chapter 6 ).

But the responsibilities of the research community and research institutions in assuring individual compliance with scientific principles, traditions, and codes of ethics are not well defined. In recent years, the absence of formal statements by research institutions of the principles that should guide research conducted by their members has prompted criticism that scientists and their institutions lack a clearly identifiable means to ensure the integrity of the research process.

FACTORS AFFECTING THE DEVELOPMENT OF RESEARCH PRACTICES

In all of science, but with unequal emphasis in the several disciplines, inquiry proceeds based on observation and experimentation, the exercising of informed judgment, and the development of theory. Research practices are influenced by a variety of factors, including:

  • The general norms of science;
  • The nature of particular scientific disciplines and the traditions of organizing a specific body of scientific knowledge;
  • The example of individual scientists, particularly those who hold positions of authority or respect based on scientific achievements;
  • The policies and procedures of research institutions and funding agencies; and
  • Socially determined expectations.

The first three factors have been important in the evolution of modern science. The latter two have acquired more importance in recent times.

Norms of Science

As members of a professional group, scientists share a set of common values, aspirations, training, and work experiences. 6 Scientists are distinguished from other groups by their beliefs about the kinds of relationships that should exist among them, about the obligations incurred by members of their profession, and about their role in society. A set of general norms is embedded in the methods and the disciplines of science that guide individual scientists in the organization and performance of their research efforts and that also provide a basis for nonscientists to understand and evaluate the performance of scientists.

But there is uncertainty about the extent to which individual scientists adhere to such norms. Most social scientists conclude that all behavior is influenced to some degree by norms that reflect socially or morally supported patterns of preference when alternative courses of action are possible. However, perfect conformity with any relevant set of norms is always lacking for a variety of reasons: the existence of competing norms, constraints, and obstacles in organizational or group settings, and personality factors. The strength of these influences, and the circumstances that may affect them, are not well understood.

In a classic statement of the importance of scientific norms, Robert Merton specified four norms as essential for the effective functioning of science: communism (by which Merton meant the communal sharing of ideas and findings), universalism, disinterestedness, and organized skepticism (Merton, 1973). Neither Merton nor other sociologists of science have provided solid empirical evidence for the degree of influence of these norms in a representative sample of scientists. In opposition to Merton, a British sociologist of science, Michael Mulkay, has argued that these norms are “ideological” covers for self-interested behavior that reflects status and politics (Mulkay, 1975). And the British physicist and sociologist of science John Ziman, in an article synthesizing critiques of Merton's formulation, has specified a set of structural factors in the bureaucratic and corporate research environment that impede the realization of that particular set of norms: the proprietary nature of research, the local importance and funding of research, the authoritarian role of the research manager, commissioned research, and the required expertise in understanding how to use modern instruments (Ziman, 1990).

It is clear that the specific influence of norms on the development of scientific research practices is simply not known and that further study of key determinants is required, both theoretically and empirically. Commonsense views, ideologies, and anecdotes will not support a conclusive appraisal.

Individual Scientific Disciplines

Science comprises individual disciplines that reflect historical developments and the organization of natural and social phenomena for study. Social scientists may have methods for recording research data that differ from the methods of biologists, and scientists who depend on complex instrumentation may have authorship practices different from those of scientists who work in small groups or carry out field studies. Even within a discipline, experimentalists engage in research practices that differ from the procedures followed by theorists.

Disciplines are the “building blocks of science,” and they “designate the theories, problems, procedures, and solutions that are prescribed, proscribed, permitted, and preferred” (Zuckerman, 1988a, p. 520). The disciplines have traditionally provided the vital connections between scientific knowledge and its social organization. Scientific societies and scientific journals, some of which have tens of thousands of members and readers, and the peer review processes used by journals and research sponsors are visible forms of the social organization of the disciplines.

The power of the disciplines to shape research practices and standards is derived from their ability to provide a common frame of reference in evaluating the significance of new discoveries and theories in science. It is the members of a discipline, for example, who determine what is “good biology” or “good physics” by examining the implications of new research results. The disciplines' abilities to influence research standards are affected by the subjective quality of peer review and the extent to which factors other than disciplinary quality may affect judgments about scientific achievements. Disciplinary departments rely primarily on informal social and professional controls to promote responsible behavior and to penalize deviant behavior. These controls, such as social ostracism, the denial of letters of support for future employment, and the withholding of research resources, can deter and penalize unprofessional behavior within research institutions. 7

Many scientific societies representing individual disciplines have adopted explicit standards in the form of codes of ethics or guidelines governing, for example, the editorial practices of their journals and other publications. 8 Many societies have also established procedures for enforcing their standards. In the past decade, the societies' codes of ethics—which historically have been exhortations to uphold high standards of professional behavior—have incorporated specific guidelines relevant to authorship practices, data management, training and mentoring, conflict of interest, reporting research findings, treatment of confidential or proprietary information, and addressing error or misconduct.

The Role of Individual Scientists and Research Teams

The methods by which individual scientists and students are socialized in the principles and traditions of science are poorly understood. The principles of science and the practices of the disciplines are transmitted by scientists in classroom settings and, perhaps more importantly, in research groups and teams. The social setting of the research group is a strong and valuable characteristic of American science and education. The dynamics of research groups can foster—or inhibit—innovation, creativity, education, and collaboration.

One author of a historical study of research groups in the chemical and biochemical sciences has observed that the laboratory director or group leader is the primary determinant of a group's practices (Fruton, 1990). Individuals in positions of authority are visible and are also influential in determining funding and other support for the career paths of their associates and students. Research directors and department chairs, by virtue of personal example, thus can reinforce, or weaken, the power of disciplinary standards and scientific norms to affect research practices.

To the extent that the behavior of senior scientists conforms with general expectations for appropriate scientific and disciplinary practice, the research system is coherent and mutually reinforcing. When the behavior of research directors or department chairs diverges from expectations for good practice, however, the expected norms of science become ambiguous, and their effects are thus weakened. Thus personal example and the perceived behavior of role models and leaders in the research community can be powerful stimuli in shaping the research practices of colleagues, associates, and students.

The role of individuals in influencing research practices can vary by research field, institution, or time. The standards and expectations for behavior exemplified by scientists who are highly regarded for their technical competence or creative insight may have greater influence than the standards of others. Individual and group behaviors may also be more influential in times of uncertainty and change in science, especially when new scientific theories, paradigms, or institutional relationships are being established.

Institutional Policies

Universities, independent institutes, and government and industrial research organizations create the environment in which research is done. As the recipients of federal funds and the institutional sponsors of research activities, administrative officers must comply with regulatory and legal requirements that accompany public support. They are required, for example, “to foster a research environment that discourages misconduct in all research and that deals forthrightly with possible misconduct” (DHHS, 1989a, p. 32451).

Academic institutions traditionally have relied on their faculty to ensure that appropriate scientific and disciplinary standards are maintained. A few universities and other research institutions have also adopted policies or guidelines to clarify the principles that their members are expected to observe in the conduct of scientific research. 9 In addition, as a result of several highly publicized incidents of misconduct in science and the subsequent enactment of governmental regulations, most major research institutions have now adopted policies and procedures for handling allegations of misconduct in science.

Institutional policies governing research practices can have a powerful effect on research practices if they are commensurate with the norms that apply to a wide spectrum of research investigators. In particular, the process of adopting and implementing strong institutional policies can sensitize the members of those institutions to the potential for ethical problems in their work. Institutional policies can establish explicit standards that institutional officers then have the power to enforce with sanctions and penalties.

Institutional policies are limited, however, in their ability to specify the details of every problematic situation, and they can weaken or displace individual professional judgment in such situations. Currently, academic institutions have very few formal policies and programs in specific areas such as authorship, communication and publication, and training and supervision.

Government Regulations and Policies

Government agencies have developed specific rules and procedures that directly affect research practices in areas such as laboratory safety, the treatment of human and animal research subjects, and the use of toxic or potentially hazardous substances in research.

But policies and procedures adopted by some government research agencies to address misconduct in science (see Chapter 5 ) represent a significant new regulatory development in the relationships between research institutions and government sponsors. The standards and criteria used to monitor institutional compliance with an increasing number of government regulations and policies affecting research practices have been a source of significant disagreement and tension within the research community.

In recent years, some government research agencies have also adopted policies and procedures for the treatment of research data and materials in their extramural research programs. For example, the National Science Foundation (NSF) has implemented a data-sharing policy through program management actions, including proposal review and award negotiations and conditions. The NSF policy acknowledges that grantee institutions will “keep principal rights to intellectual property conceived under NSF sponsorship” to encourage appropriate commercialization of the results of research (NSF, 1989b, p. 1). However, the NSF policy emphasizes “that retention of such rights does not reduce the responsibility of researchers and institutions to make results and supporting materials openly accessible” (p. 1).

In seeking to foster data sharing under federal grant awards, the government relies extensively on the scientific traditions of openness and sharing. Research agency officials have observed candidly that if the vast majority of scientists were not so committed to openness and dissemination, government policy might require more aggressive action. But the principles that have traditionally characterized scientific inquiry can be difficult to maintain. For example, NSF staff have commented, “Unless we can arrange real returns or incentives for the original investigator, either in financial support or in professional recognition, another researcher's request for sharing is likely to present itself as ‘hassle'—an unwelcome nuisance and diversion. Therefore, we should hardly be surprised if researchers display some reluctance to share in practice, however much they may declare and genuinely feel devotion to the ideal of open scientific communication” (NSF, 1989a, p. 4).

Social Attitudes and Expectations

Research scientists are part of a larger human society that has recently experienced profound changes in attitudes about ethics, morality, and accountability in business, the professions, and government. These attitudes have included greater skepticism of the authority of experts and broader expectations about the need for visible mechanisms to assure proper research practices, especially in areas that affect the public welfare. Social attitudes are also having a more direct influence on research practices as science achieves a more prominent and public role in society. In particular, concern about waste, fraud, and abuse involving government funds has emerged as a factor that now directly influences the practices of the research community.

Varying historical and conceptual perspectives also can affect expectations about standards of research practice. For example, some journalists have criticized several prominent scientists, such as Mendel, Newton, and Millikan, because they “cut corners in order to make their theories prevail” (Broad and Wade, 1982, p. 35). The criticism suggests that all scientists at all times, in all phases of their work, should be bound by identical standards.

Yet historical studies of the social context in which scientific knowledge has been attained suggest that modern criticism of early scientific work often imposes contemporary standards of objectivity and empiricism that have in fact been developed in an evolutionary manner. 10 Holton has argued, for example, that in selecting data for publication, Millikan exercised creative insight in excluding unreliable data resulting from experimental error. But such practices, by today's standards, would not be acceptable without reporting the justification for omission of recorded data.

In the early stages of pioneering studies, particularly when fundamental hypotheses are subject to change, scientists must be free to use creative judgment in deciding which data are truly significant. In such moments, the standards of proof may be quite different from those that apply at stages when confirmation and consensus are sought from peers. Scientists must consistently guard against self-deception, however, particularly when theoretical prejudices tend to overwhelm the skepticism and objectivity basic to experimental practices.

In discussing “the theory-ladenness of observations,” Sapp (1990) observed the fundamental paradox that can exist in determining the “appropriateness” of data selection in certain experiments done in the past: scientists often craft their experiments so that the scientific problems and research subjects conform closely with the theory that they expect to verify or refute. Thus, in some cases, their observations may come closer to theoretical expectations than what might be statistically proper.

This source of bias may be acceptable when it is influenced by scientific insight and judgment. But political, financial, or other sources of bias can corrupt the process of data selection. In situations where both kinds of influence exist, it is particularly important for scientists to be forthcoming about possible sources of bias in the interpretation of research results. The coupling of science to other social purposes in fostering economic growth and commercial technology requires renewed vigilance to maintain acceptable standards for disclosure and control of financial or competitive conflicts of interest and bias in the research environment. The failure to distinguish between appropriate and inappropriate sources of bias in research practices can lead to erosion of public trust in the autonomy of the research enterprise.

RESEARCH PRACTICES

In reviewing modern research practices for a range of disciplines, and analyzing factors that could affect the integrity of the research process, the panel focused on the following four areas:

Data handling—acquisition, management, and storage;

Communication and publication;

Correction of errors; and

Research training and mentorship.

Commonly understood practices operate in each area to promote responsible research conduct; nevertheless, some questionable research practices also occur. Some research institutions, scientific societies, and journals have established policies to discourage questionable practices, but there is not yet a consensus on how to treat violations of these policies. 11 Furthermore, there is concern that some questionable practices may be encouraged or stimulated by other institutional factors. For example, promotion or appointment policies that stress quantity rather than the quality of publications as a measure of productivity could contribute to questionable practices.

Data Handling

Acquisition and management.

Scientific experiments and measurements are transformed into research data. The term “research data” applies to many different forms of scientific information, including raw numbers and field notes, machine tapes and notebooks, edited and categorized observations, interpretations and analyses, derived reagents and vectors, and tables, charts, slides, and photographs.

Research data are the basis for reporting discoveries and experimental results. Scientists traditionally describe the methods used for an experiment, along with appropriate calibrations, instrument types, the number of repeated measurements, and particular conditions that may have led to the omission of some data in the reported version. Standard procedures, innovations for particular purposes, and judgments concerning the data are also reported. The general standard of practice is to provide information that is sufficiently complete so that another scientist can repeat or extend the experiment.

When a scientist communicates a set of results and a related piece of theory or interpretation in any form (at a meeting, in a journal article, or in a book), it is assumed that the research has been conducted as reported. It is a violation of the most fundamental aspect of the scientific research process to set forth measurements that have not, in fact, been performed (fabrication) or to ignore or change relevant data that contradict the reported findings (falsification).

On occasion what is actually proper research practice may be confused with misconduct in science. Thus, for example, applying scientific judgment to refine data and to remove spurious results places special responsibility on the researcher to avoid misrepresentation of findings. Responsible practice requires that scientists disclose the basis for omitting or modifying data in their analyses of research results, especially when such omissions or modifications could alter the interpretation or significance of their work.

In the last decade, the methods by which research scientists handle, store, and provide access to research data have received increased scrutiny, owing to conflicts over ownership, such as those described by Nelkin (1984); advances in the methods and technologies that are used to collect, retain, and share data; and the costs of data storage. More specific concerns have involved the profitability associated with the patenting of science-based results in some fields and the need to verify independently the accuracy of research results used in public or private decision making. In resolving competing claims, the interests of individual scientists and research institutions may not always coincide: researchers may be willing to exchange scientific data of possible economic significance without regard for financial or institutional implications, whereas their institutions may wish to establish intellectual property rights and obligations prior to any disclosure.

The general norms of science emphasize the principle of openness. Scientists are generally expected to exchange research data as well as unique research materials that are essential to the replication or extension of reported findings. The 1985 report Sharing Research Data concluded that the general principle of data sharing is widely accepted, especially in the behavioral and social sciences (NRC, 1985). The report catalogued the benefits of data sharing, including maintaining the integrity of the research process by providing independent opportunities for verification, refutation, or refinement of original results and data; promoting new research and the development and testing of new theories; and encouraging appropriate use of empirical data in policy formulation and evaluation. The same report examined obstacles to data sharing, which include the criticism or competition that might be stimulated by data sharing; technical barriers that may impede the exchange of computer-readable data; lack of documentation of data sets; and the considerable costs of documentation, duplication, and transfer of data.

The exchange of research data and reagents is ideally governed by principles of collegiality and reciprocity: scientists often distribute reagents with the hope that the recipient will reciprocate in the future, and some give materials out freely with no stipulations attached. 12 Scientists who repeatedly or flagrantly deviate from the tradition of sharing become known to their peers and may suffer subtle forms of professional isolation. Such cases may be well known to senior research investigators, but they are not well documented.

Some scientists may share materials as part of a collaborative agreement in exchange for co-authorship on resulting publications. Some donors stipulate that the shared materials are not to be used for applications already being pursued by the donor's laboratory. Other stipulations include that the material not be passed on to third parties without prior authorization, that the material not be used for proprietary research, or that the donor receive prepublication copies of research publications derived from the material. In some instances, so-called materials transfer agreements are executed to specify the responsibilities of donor and recipient. As more academic research is being supported under proprietary agreements, researchers and institutions are experiencing the effects of these arrangements on research practices.

Governmental support for research studies may raise fundamental questions of ownership and rights of control, particularly when data are subsequently used in proprietary efforts, public policy decisions, or litigation. Some federal research agencies have adopted policies for data sharing to mitigate conflicts over issues of ownership and access (NIH, 1987; NSF, 1989b).

Many research investigators store primary data in the laboratories in which the data were initially derived, generally as electronic records or data sheets in laboratory notebooks. For most academic laboratories, local customary practice governs the storage (or discarding) of research data. Formal rules or guidelines concerning their disposition are rare.

Many laboratories customarily store primary data for a set period (often 3 to 5 years) after they are initially collected. Data that support publications are usually retained for a longer period than are those tangential to reported results. Some research laboratories serve as the proprietor of data and data books that are under the stewardship of the principal investigator. Others maintain that it is the responsibility of the individuals who collected the data to retain proprietorship, even if they leave the laboratory.

Concerns about misconduct in science have raised questions about the roles of research investigators and of institutions in maintaining and providing access to primary data. In some cases of alleged misconduct, the inability or unwillingness of an investigator to provide primary data or witnesses to support published reports sometimes has constituted a presumption that the experiments were not conducted as reported. 13 Furthermore, there is disagreement about the responsibilities of investigators to provide access to raw data, particularly when the reported results have been challenged by others. Many scientists believe that access should be restricted to peers and colleagues, usually following publication of research results, to reduce external demands on the time of the investigator. Others have suggested that raw data supporting research reports should be accessible to any critic or competitor, at any time, especially if the research is conducted with public funds. This topic, in particular, could benefit from further research and systematic discussion to clarify the rights and responsibilities of research investigators, institutions, and sponsors.

Institutional policies have been developed to guide data storage practices in some fields, often stimulated by desires to support the patenting of scientific results and to provide documentation for resolving disputes over patent claims. Laboratories concerned with patents usually have very strict rules concerning data storage and note keeping, often requiring that notes be recorded in an indelible form and be countersigned by an authorized person each day. A few universities have also considered the creation of central storage repositories for all primary data collected by their research investigators. Some government research institutions and industrial research centers maintain such repositories to safeguard the record of research developments for scientific, historical, proprietary, and national security interests.

In the academic environment, however, centralized research records raise complex problems of ownership, control, and access. Centralized data storage is costly in terms of money and space, and it presents logistical problems of cataloguing and retrieving data. There have been suggestions that some types of scientific data should be incorporated into centralized computerized data banks, a portion of which could be subject to periodic auditing or certification. 14 But much investigator-initiated research is not suitable for random data audits because of the exploratory nature of basic or discovery research. 15

Some scientific journals now require that full data for research papers be deposited in a centralized data bank before final publication. Policies and practices differ, but in some fields support is growing for compulsory deposit to enhance researchers' access to supporting data.

Issues Related to Advances in Information Technology

Advances in electronic and other information technologies have raised new questions about the customs and practices that influence the storage, ownership, and exchange of electronic data and software. A number of special issues, not addressed by the panel, are associated with computer modeling, simulation, and other approaches that are becoming more prevalent in the research environment. Computer technology can enhance research collaboration; it can also create new impediments to data sharing resulting from increased costs, the need for specialized equipment, or liabilities or uncertainties about responsibilities for faulty data, software, or computer-generated models.

Advances in computer technology may assist in maintaining and preserving accurate records of research data. Such records could help resolve questions about the timing or accuracy of specific research findings, especially when a principal investigator is not available or is uncooperative in responding to such questions. In principle, properly managed information technologies, utilizing advances in nonerasable optical disk systems, might reinforce openness in scientific research and make primary data more transparent to collaborators and research managers. For example, the so-called WORM (write once, read many) systems provide a high-density digital storage medium that supplies an ineradicable audit trail and historical record for all entered information (Haas, 1991).

Advances in information technologies could thus provide an important benefit to research institutions that wish to emphasize greater access to and storage of primary research data. But the development of centralized information systems in the academic research environment raises difficult issues of ownership, control, and principle that reflect the decentralized character of university governance. Such systems are also a source of additional research expense, often borne by individual investigators. Moreover, if centralized systems are perceived by scientists as an inappropriate or ineffective form of management or oversight of individual research groups, they simply may not work in an academic environment.

Communication and Publication

Scientists communicate research results by a variety of formal and informal means. In earlier times, new findings and interpretations were communicated by letter, personal meeting, and publication. Today, computer networks and facsimile machines have supplemented letters and telephones in facilitating rapid exchange of results. Scientific meetings routinely include poster sessions and press conferences as well as formal presentations. Although research publications continue to document research findings, the appearance of electronic publications and other information technologies heralds change. In addition, incidents of plagiarism, the increasing number of authors per article in selected fields, and the methods by which publications are assessed in determining appointments and promotions have all increased concerns about the traditions and practices that have guided communication and publication.

Journal publication, traditionally an important means of sharing information and perspectives among scientists, is also a principal means of establishing a record of achievement in science. Evaluation of the accomplishments of individual scientists often involves not only the numbers of articles that have resulted from a selected research effort, but also the particular journals in which the articles have appeared. Journal submission dates are often important in establishing priority and intellectual property claims.

Authorship of original research reports is an important indicator of accomplishment, priority, and prestige within the scientific community. Questions of authorship in science are intimately connected with issues of credit and responsibility. Authorship practices are guided by disciplinary traditions, customary practices within research groups, and professional and journal standards and policies. 16 There is general acceptance of the principle that each named author has made a significant intellectual contribution to the paper, even though there remains substantial disagreement over the types of contributions that are judged to be significant.

A general rule is that an author must have participated sufficiently in the work to take responsibility for its content and vouch for its validity. Some journals have adopted more specific guidelines, suggesting that credit for authorship be contingent on substantial participation in one or more of the following categories: (1) conception and design of the experiment, (2) execution of the experiment and collection and storage of the supporting data, (3) analysis and interpretation of the primary data, and (4) preparation and revision of the manuscript. The extent of participation in these four activities required for authorship varies across journals, disciplines, and research groups. 17

“Honorary,” “gift,” or other forms of noncontributing authorship are problems with several dimensions. 18 Honorary authors reap an inflated list of publications incommensurate with their scientific contributions (Zen, 1988). Some scientists have requested or been given authorship as a form of recognition of their status or influence rather than their intellectual contribution. Some research leaders have a custom of including their own names in any paper issuing from their laboratory, although this practice is increasingly discouraged. Some students or junior staff encourage such “gift authorship” because they feel that the inclusion of prestigious names on their papers increases the chance of publication in well-known journals. In some cases, noncontributing authors have been listed without their consent, or even without their being told. In response to these practices, some journals now require all named authors to sign the letter that accompanies submission of the original article, to ensure that no author is named without consent.

“Specialized” authorship is another issue that has received increasing attention. In these cases, a co-author may claim responsibility for a specialized portion of the paper and may not even see or be able to defend the paper as a whole. 19 “Specialized” authorship may also result from demands that co-authorship be given as a condition of sharing a unique research reagent or selected data that do not constitute a major contribution—demands that many scientists believe are inappropriate. “Specialized” authorship may be appropriate in cross-disciplinary collaborations, in which each participant has made an important contribution that deserves recognition. However, the risks associated with the inabilities of co-authors to vouch for the integrity of an entire paper are great; scientists may unwittingly become associated with a discredited publication.

Another problem of lesser importance, except to the scientists involved, is the order of authors listed on a paper. The meaning of author order varies among and within disciplines. For example, in physics the ordering of authors is frequently alphabetical, whereas in the social sciences and other fields, the ordering reflects a descending order of contribution to the described research. Another practice, common in biology, is to list the senior author last.

Appropriate recognition for the contributions of junior investigators, postdoctoral fellows, and graduate students is sometimes a source of discontent and unease in the contemporary research environment. Junior researchers have raised concerns about treatment of their contributions when research papers are prepared and submitted, particularly if they are attempting to secure promotions or independent research funding or if they have left the original project. In some cases, well-meaning senior scientists may grant junior colleagues undeserved authorship or placement as a means of enhancing the junior colleague's reputation. In others, significant contributions may not receive appropriate recognition.

Authorship practices are further complicated by large-scale projects, especially those that involve specialized contributions. Mission teams for space probes, oceanographic expeditions, and projects in high-energy physics, for example, all involve large numbers of senior scientists who depend on the long-term functioning of complex equipment. Some questions about communication and publication that arise from large science projects such as the Superconducting Super Collider include: Who decides when an experiment is ready to be published? How is the spokesperson for the experiment determined? Who determines who can give talks on the experiment? How should credit for technical or hardware contributions be acknowledged?

Apart from plagiarism, problems of authorship and credit allocation usually do not involve misconduct in science. Although some forms of “gift authorship,” in which a designated author made no identifiable contribution to a paper, may be viewed as instances of falsification, authorship disputes more commonly involve unresolved differences of judgment and style. Many research groups have found that the best method of resolving authorship questions is to agree on a designation of authors at the outset of the project. The negotiation and decision process provides initial recognition of each member's effort, and it may prevent misunderstandings that can arise during the course of the project when individuals may be in transition to new efforts or may become preoccupied with other matters.

Plagiarism. Plagiarism is using the ideas or words of another person without giving appropriate credit. Plagiarism includes the unacknowledged use of text and ideas from published work, as well as the misuse of privileged information obtained through confidential review of research proposals and manuscripts.

As described in Honor in Science, plagiarism can take many forms: at one extreme is the exact replication of another's writing without appropriate attribution (Sigma Xi, 1986). At the other is the more subtle “borrowing” of ideas, terms, or paraphrases, as described by Martin et al., “so that the result is a mosaic of other people's ideas and words, the writer's sole contribution being the cement to hold the pieces together.” 20 The importance of recognition for one's intellectual abilities in science demands high standards of accuracy and diligence in ensuring appropriate recognition for the work of others.

The misuse of privileged information may be less clear-cut because it does not involve published work. But the general principles of the importance of giving credit to the accomplishments of others are the same. The use of ideas or information obtained from peer review is not acceptable because the reviewer is in a privileged position. Some organizations, such as the American Chemical Society, have adopted policies to address these concerns (ACS, 1986).

Additional Concerns. Other problems related to authorship include overspecialization, overemphasis on short-term projects, and the organization of research communication around the “least publishable unit.” In a research system that rewards quantity at the expense of quality and favors speed over attention to detail (the effects of “publish or perish”), scientists who wait until their research data are complete before releasing them for publication may be at a disadvantage. Some institutions, such as Harvard Medical School, have responded to these problems by limiting the number of publications reviewed for promotion. Others have placed greater emphasis on major contributions as the basis for evaluating research productivity.

As gatekeepers of scientific journals, editors are expected to use good judgment and fairness in selecting papers for publication. Although editors cannot be held responsible for the errors or inaccuracies of papers that may appear in their journals, editors have obligations to consider criticism and evidence that might contradict the claims of an author and to facilitate publication of critical letters, errata, or retractions. 21 Some institutions, including the National Library of Medicine and professional societies that represent editors of scientific journals, are exploring the development of standards relevant to these obligations (Bailar et al., 1990).

Should questions be raised about the integrity of a published work, the editor may request an author's institution to address the matter. Editors often request written assurances that research reported conforms to all appropriate guidelines involving human or animal subjects, materials of human origin, or recombinant DNA.

In theory, editors set standards of authorship for their journals. In practice, scientists in the specialty do. Editors may specify the terms of acknowledgment of contributors who fall short of authorship status, and make decisions regarding appropriate forms of disclosure of sources of bias or other potential conflicts of interest related to published articles. For example, the New England Journal of Medicine has established a category of prohibited contributions from authors engaged in for-profit ventures: the journal will not allow such persons to prepare review articles or editorial commentaries for publication. Editors can clarify and insist on the confidentiality of review and take appropriate actions against reviewers who violate it. Journals also may require or encourage their authors to deposit reagents and sequence and crystallographic data into appropriate databases or storage facilities. 22

Peer Review

Peer review is the process by which editors and journals seek to be advised by knowledgeable colleagues about the quality and suitability of a manuscript for publication in a journal. Peer review is also used by funding agencies to seek advice concerning the quality and promise of proposals for research support. The proliferation of research journals and the rewards associated with publication and with obtaining research grants have put substantial stress on the peer review system. Reviewers for journals or research agencies receive privileged information and must exert great care to avoid sharing such information with colleagues or allowing it to enter their own work prematurely.

Although the system of peer review is generally effective, it has been suggested that the quality of refereeing has declined, that self-interest has crept into the review process, and that some journal editors and reviewers exert inappropriate influence on the type of work they deem publishable. 23

Correction of Errors

At some level, all scientific reports, even those that mark profound advances, contain errors of fact or interpretation. In part, such errors reflect uncertainties intrinsic to the research process itself—a hypothesis is formulated, an experimental test is devised, and based on the interpretation of the results, the hypothesis is refined, revised, or discarded. Each step in this cycle is subject to error. For any given report, “correctness” is limited by the following:

The precision and accuracy of the measurements. These in turn depend on available technology, the use of proper statistical and analytical methods, and the skills of the investigator.

Generality of the experimental system and approach. Studies must often be carried out using “model systems.” In biology, for example, a given phenomenon is examined in only one or a few among millions of organismal species.

Experimental design—a product of the background and expertise of the investigator.

Interpretation and speculation regarding the significance of the findings—judgments that depend on expert knowledge, experience, and the insightfulness and boldness of the investigator.

Viewed in this context, errors are an integral aspect of progress in attaining scientific knowledge. They are consequences of the fact that scientists seek fundamental truths about natural processes of vast complexity. In the best experimental systems, it is common that relatively few variables have been identified and that even fewer can be controlled experimentally. Even when important variables are accounted for, the interpretation of the experimental results may be incorrect and may lead to an erroneous conclusion. Such conclusions are sometimes overturned by the original investigator or by others when new insights from another study prompt a reexamination of older reported data. In addition, however, erroneous information can also reach the scientific literature as a consequence of misconduct in science.

What becomes of these errors or incorrect interpretations? Much has been made of the concept that science is “self-correcting”—that errors, whether honest or products of misconduct, will be exposed in future experiments because scientific truth is founded on the principle that results must be verifiable and reproducible. This implies that errors will generally not long confound the direction of thinking or experimentation in actively pursued areas of research. Clearly, published experiments are not routinely replicated precisely by independent investigators. However, each experiment is based on conclusions from prior studies; repeated failure of the experiment eventually calls into question those conclusions and leads to reevaluation of the measurements, generality, design, and interpretation of the earlier work.

Thus publication of a scientific report provides an opportunity for the community at large to critique and build on the substance of the report, and serves as one stage at which errors and misinterpretations can be detected and corrected. Each new finding is considered by the community in light of what is already known about the system investigated, and disagreements with established measurements and interpretations must be justified. For example, a particular interpretation of an electrical measurement of a material may implicitly predict the results of an optical experiment. If the reported optical results are in disagreement with the electrical interpretation, then the latter is unlikely to be correct—even though the measurements themselves were carefully and correctly performed. It is also possible, however, that the contradictory results are themselves incorrect, and this possibility will also be evaluated by the scientists working in the field. It is by this process of examination and reexamination that science advances.

The research endeavor can therefore be viewed as a two-tiered process: first, hypotheses are formulated, tested, and modified; second, results and conclusions are reevaluated in the course of additional study. In fact, the two tiers are interrelated, and the goals and traditions of science mandate major responsibilities in both areas for individual investigators. Importantly, the principle of self-correction does not diminish the responsibilities of the investigator in either area. The investigator has a fundamental responsibility to ensure that the reported results can be replicated in his or her laboratory. The scientific community in general adheres strongly to this principle, but practical constraints exist as a result of the availability of specialized instrumentation, research materials, and expert personnel. Other forces, such as competition, commercial interest, funding trends and availability, or pressure to publish may also erode the role of replication as a mechanism for fostering integrity in the research process. The panel is unaware of any quantitative studies of this issue.

The process of reevaluating prior findings is closely related to the formulation and testing of hypotheses. 24 Indeed, within an individual laboratory, the formulation/testing phase and the reevaluation phase are ideally ongoing interactive processes. In that setting, the precise replication of a prior result commonly serves as a crucial control in attempts to extend the original findings. It is not unusual that experimental flaws or errors of interpretation are revealed as the scope of an investigation deepens and broadens.

If new findings or significant questions emerge in the course of a reevaluation that affect the claims of a published report, the investigator is obliged to make public a correction of the erroneous result or to indicate the nature of the questions. Occasionally, this takes the form of a formal published retraction, especially in situations in which a central claim is found to be fundamentally incorrect or irreproducible. More commonly, a somewhat different version of the original experiment, or a revised interpretation of the original result, is published as part of a subsequent report that extends in other ways the initial work. Some concerns have been raised that such “revisions” can sometimes be so subtle and obscure as to be unrecognizable. Such behavior is, at best, a questionable research practice. Clearly, each scientist has a responsibility to foster an environment that encourages and demands rigorous evaluation and reevaluation of every key finding.

Much greater complexity is encountered when an investigator in one research group is unable to confirm the published findings of another. In such situations, precise replication of the original result is commonly not attempted because of the lack of identical reagents, differences in experimental protocols, diverse experimental goals, or differences in personnel. Under these circumstances, attempts to obtain the published result may simply be dropped if the central claim of the original study is not the major focus of the new study. Alternatively, the inability to obtain the original finding may be documented in a paper by the second investigator as part of a challenge to the original claim. In any case, such questions about a published finding usually provoke the initial investigator to attempt to reconfirm the original result, or to pursue additional studies that support and extend the original findings.

In accordance with established principles of science, scientists have the responsibility to replicate and reconfirm their results as a normal part of the research process. The cycles of theoretical and methodological formulation, testing, and reevaluation, both within and between laboratories, produce an ongoing process of revision and refinement that corrects errors and strengthens the fabric of research.

Research Training and Mentorship

The panel defined a mentor as that person directly responsible for the professional development of a research trainee. 25 Professional development includes both technical training, such as instruction in the methods of scientific research (e.g., research design, instrument use, and selection of research questions and data), and socialization in basic research practices (e.g., authorship practices and sharing of research data).

Positive Aspects of Mentorship

The relationship of the mentor and research trainee is usually characterized by extraordinary mutual commitment and personal involvement. A mentor, as a research advisor, is generally expected to supervise the work of the trainee and ensure that the trainee's research is completed in a sound, honest, and timely manner. The ideal mentor challenges the trainee, spurs the trainee to higher scientific achievement, and helps socialize the trainee into the community of scientists by demonstrating and discussing methods and practices that are not well understood.

Research mentors thus have complex and diverse roles. Many individuals excel in providing guidance and instruction as well as personal support, and some mentors are resourceful in providing funds and securing professional opportunities for their trainees. The mentoring relationship may also combine elements of other relationships, such as parenting, coaching, and guildmastering. One mentor has written that his “research group is like an extended family or small tribe, dependent on one another, but led by the mentor, who acts as their consultant, critic, judge, advisor, and scientific father” (Cram, 1989, p. 1). Another mentor described trainees who had lost their mentors to death, job changes, or other circumstances as “orphaned graduate students” (Sindermann, 1987). Many students come to respect and admire their mentors, who act as role models for their younger colleagues.

Difficulties Associated with Mentorship

However, the mentoring relationship does not always function properly or even satisfactorily. Almost no literature exists that evaluates which problems are idiosyncratic and which are systemic. However, it is clear that traditional practices in the area of mentorship and training are under stress. In some research fields, for example, concerns are being raised about how the increasing size and diverse composition of research groups affect the quality of the relationship between trainee and mentor. As the size of research laboratories expands, the quality of the training environment is at risk (CGS, 1990a).

Large laboratories may provide valuable instrumentation and access to unique research skills and resources as well as an opportunity to work in pioneering fields of science. But as only one contribution to the efforts of a large research team, a graduate student's work may become highly specialized, leading to a narrowing of experience and greater dependency on senior personnel; in a period when the availability of funding may limit research opportunities, laboratory heads may find it necessary to balance research decisions for the good of the team against the individual educational interests of each trainee. Moreover, the demands of obtaining sufficient resources to maintain a laboratory in the contemporary research environment often separate faculty from their trainees. When laboratory heads fail to participate in the everyday workings of the laboratory—even for the most beneficent of reasons, such as finding funds to support young investigators—their inattention may harm their trainees' education.

Although the size of a research group can influence the quality of mentorship, the more important issues are the level of supervision received by trainees, the degree of independence that is appropriate for the trainees' experience and interests, and the allocation of credit for achievements that are accomplished by groups composed of individuals with different status. Certain studies involving large groups of 40 to 100 or more are commonly carried out by collaborative or hierarchical arrangements under a single investigator. These factors may affect the ability of research mentors to transmit the methods and ethical principles according to which research should be conducted.

Problems also arise when faculty members are not directly rewarded for their graduate teaching or training skills. Although faculty may receive indirect rewards from the contributions of well-trained graduate students to their own research as well as the satisfaction of seeing their students excelling elsewhere, these rewards may not be sufficiently significant in tenure or promotion decisions. When institutional policies fail to recognize and reward the value of good teaching and mentorship, the pressures to maintain stable funding for research teams in a competitive environment can overwhelm the time allocated to teaching and mentorship by a single investigator.

The increasing duration of the training period in many research fields is another source of concern, particularly when it prolongs the dependent status of the junior investigator. The formal period of graduate and postdoctoral training varies considerably among fields of study. In 1988, the median time to the doctorate from the baccalaureate degree was 6.5 years (NRC, 1989). The disciplinary median varied: 5.5 years in chemistry; 5.9 years in engineering; 7.1 years in health sciences and in earth, atmospheric, and marine sciences; and 9.0 years in anthropology and sociology. 26

Students, research associates, and faculty are currently raising various questions about the rights and obligations of trainees. Sexist behavior by some research directors and other senior scientists is a particular source of concern. Another significant concern is that research trainees may be subject to exploitation because of their subordinate status in the research laboratory, particularly when their income, access to research resources, and future recommendations are dependent on the goodwill of the mentor. Foreign students and postdoctoral fellows may be especially vulnerable, since their immigration status often depends on continuation of a research relationship with the selected mentor.

Inequalities between mentor and trainee can exacerbate ordinary conflicts such as the distribution of credit or blame for research error (NAS, 1989). When conflicts arise, the expectations and assumptions that govern authorship practices, ownership of intellectual property, and the giving of references and recommendations are exposed for professional—and even legal—scrutiny (Nelkin, 1984; Weil and Snapper, 1989).

Making Mentorship Better

Ideally, mentors and trainees should select each other with an eye toward scientific merit, intellectual and personal compatibility, and other relevant factors. But this situation operates only under conditions of freely available information and unconstrained choice—conditions that usually do not exist in academic research groups. The trainee may choose to work with a faculty member based solely on criteria of patronage, perceived influence, or ability to provide financial support.

Good mentors may be well known and highly regarded within their research communities and institutions. Unfortunately, individuals who exploit the mentorship relationship may be less visible. Poor mentorship practices may be self-correcting over time, if students can detect and avoid research groups characterized by disturbing practices. However, individual trainees who experience abusive relationships with a mentor may discover only too late that the practices that constitute the abuse were well known but were not disclosed to new initiates.

It is common practice for a graduate student to be supervised not only by an individual mentor but also by a committee that represents the graduate department or research field of the student. However, departmental oversight is rare for the postdoctoral research fellow. In order to foster good mentorship practices for all research trainees, many groups and institutions have taken steps to clarify the nature of individual and institutional responsibilities in the mentor–trainee relationship. 27

Findings and Conclusions

The self-regulatory system that characterizes the research process has evolved from a diverse set of principles, traditions, standards, and customs transmitted from senior scientists, research directors, and department chairs to younger scientists by example, discussion, and informal education. The principles of honesty, collegiality, respect for others, and commitment to dissemination, critical evaluation, and rigorous training are characteristic of all the sciences. Methods and techniques of experimentation, styles of communicating findings, the relationship between theory and experimentation, and laboratory groupings for research and for training vary with the particular scientific disciplines. Within those disciplines, practices combine the general with the specific. Ideally, research practices reflect the values of the wider research community and also embody the practical skills needed to conduct scientific research.

Practicing scientists are guided by the principles of science and the standard practices of their particular scientific discipline as well as their personal moral principles. But conflicts are inherent among these principles. For example, loyalty to one's group of colleagues can be in conflict with the need to correct or report an abuse of scientific practice on the part of a member of that group.

Because scientists and the achievements of science have earned the respect of society at large, the behavior of scientists must accord not only with the expectations of scientific colleagues, but also with those of a larger community. As science becomes more closely linked to economic and political objectives, the processes by which scientists formulate and adhere to responsible research practices will be subject to increasing public scrutiny. This is one reason for scientists and research institutions to clarify and strengthen the methods by which they foster responsible research practices.

Accordingly, the panel emphasizes the following conclusions:

  • The panel believes that the existing self-regulatory system in science is sound. But modifications are necessary to foster integrity in a changing research environment, to handle cases of misconduct in science, and to discourage questionable research practices.
  • Individual scientists have a fundamental responsibility to ensure that their results are reproducible, that their research is reported thoroughly enough so that results are reproducible, and that significant errors are corrected when they are recognized. Editors of scientific journals share these last two responsibilities.
  • Research mentors, laboratory directors, department heads, and senior faculty are responsible for defining, explaining, exemplifying, and requiring adherence to the value systems of their institutions. The neglect of sound training in a mentor's laboratory will over time compromise the integrity of the research process.
  • Administrative officials within the research institution also bear responsibility for ensuring that good scientific practices are observed in units of appropriate jurisdiction and that balanced reward systems appropriately recognize research quality, integrity, teaching, and mentorship. Adherence to scientific principles and disciplinary standards is at the root of a vital and productive research environment.
  • At present, scientific principles are passed on to trainees primarily by example and discussion, including training in customary practices. Most research institutions do not have explicit programs of instruction and discussion to foster responsible research practices, but the communication of values and traditions is critical to fostering responsible research practices and deterring misconduct in science.
  • Efforts to foster responsible research practices in areas such as data handling, communication and publication, and research training and mentorship deserve encouragement by the entire research community. Problems have also developed in these areas that require explicit attention and correction by scientists and their institutions. If not properly resolved, these problems may weaken the integrity of the research process.

1. See, for example, Kuyper (1991).

2. See, for example, the proposal by Pigman and Carmichael (1950).

3. See, for example, Holton (1988) and Ravetz (1971).

4. Several excellent books on experimental design and statistical methods are available. See, for example, Wilson (1952) and Beveridge (1957).

5. For a somewhat dated review of codes of ethics adopted by the scientific and engineering societies, see Chalk et al. (1981).

6. The discussion in this section is derived from Mark Frankel's background paper, “Professional Societies and Responsible Research Conduct,” included in Volume II of this report.

7. For a broader discussion on this point, see Zuckerman (1977).

8. For a full discussion of the roles of scientific societies in fostering responsible research practices, see the background paper prepared by Mark Frankel, “Professional Societies and Responsible Research Conduct,” in Volume II of this report.

9. Selected examples of academic research conduct policies and guidelines are included in Volume II of this report.

10. See, for example, Holton's response to the criticisms of Millikan in Chapter 12 of Thematic Origins of Scientific Thought (Holton, 1988). See also Holton (1978).

11. See, for example, responses to the Proceedings of the National Academy of Sciences action against Friedman: Hamilton (1990) and Abelson et al. (1990). See also the discussion in Bailar et al. (1990).

12. Much of the discussion in this section is derived from a background paper, “Reflections on the Current State of Data and Reagent Exchange Among Biomedical Researchers,” prepared by Robert Weinberg and included in Volume II of this report.

13. See, for example, Culliton (1990) and Bradshaw et al. (1990). For the impact of the inability to provide corroborating data or witnesses, also see Ross et al. (1989).

14. See, for example, Rennie (1989) and Cassidy and Shamoo (1989).

15. See, for example, the discussion on random data audits in Institute of Medicine (1989a), pp. 26-27.

16. For a full discussion of the practices and policies that govern authorship in the biological sciences, see Bailar et al. (1990).

17. Note that these general guidelines exclude the provision of reagents or facilities or the supervision of research as a criterion of authorship.

18. A full discussion of problematic practices in authorship is included in Bailar et al. (1990). A controversial review of the responsibilities of co-authors is presented by Stewart and Feder (1987).

19. In the past, scientific papers often included a special note by a named researcher, not a co-author of the paper, who described, for example, a particular substance or procedure in a footnote or appendix. This practice seems to have been abandoned for reasons that are not well understood.

20. Martin et al. (1969), as cited in Sigma Xi (1986), p. 41.

21. Huth (1988) suggests a “notice of fraud or notice of suspected fraud” issued by the journal editor to call attention to the controversy (p. 38). Angell (1983) advocates closer coordination between institutions and editors when institutions have ascertained misconduct.

22. Such facilities include Cambridge Crystallographic Data Base, GenBank at Los Alamos National Laboratory, the American Type Culture Collection, and the Protein Data Bank at Brookhaven National Laboratory. Deposition is important for data that cannot be directly printed because of large volume.

23. For more complete discussions of peer review in the wider context, see, for example, Cole et al. (1977) and Chubin and Hackett (1990).

24. The strength of theories as sources of the formulation of scientific laws and predictive power varies among different fields of science. For example, theories derived from observations in the field of evolutionary biology lack a great deal of predictive power. The role of chance in mutation and natural selection is great, and the future directions that evolution may take are essentially impossible to predict. Theory has enormous power for clarifying understanding of how evolution has occurred and for making sense of detailed data, but its predictive power in this field is very limited. See, for example, Mayr (1982, 1988).

25. Much of the discussion on mentorship is derived from a background paper prepared for the panel by David Guston. A copy of the full paper, “Mentorship and the Research Training Experience,” is included in Volume II of this report.

26. Although the time to the doctorate is increasing, there is some evidence that the magnitude of the increase may be affected by the organization of the cohort chosen for study. In the humanities, the increased time to the doctorate is not as large if one chooses as an organizational base the year in which the baccalaureate was received by Ph.D. recipients, rather than the year in which the Ph.D. was completed; see Bowen et al. (1991).

27. Some universities have written guidelines for the supervision or mentorship of trainees as part of their institutional research policy guidelines (see, for example, the guidelines adopted by Harvard University and the University of Michigan that are included in Volume II of this report). Other groups or institutions have written “guidelines” (IOM, 1989a; NIH, 1990), “checklists” (CGS, 1990a), and statements of “areas of concern” and suggested “devices” (CGS, 1990c).

The guidelines often affirm the need for regular, personal interaction between the mentor and the trainee. They indicate that mentors may need to limit the size of their laboratories so that they are able to interact directly and frequently with all of their trainees. Although there are many ways to ensure responsible mentorship, methods that provide continuous feedback, whether through formal or informal mechanisms, are apt to be the most successful (CGS, 1990a). Departmental mentorship awards (comparable to teaching or research prizes) can recognize, encourage, and enhance the mentoring relationship. For other discussions on mentorship, see the paper by David Guston in Volume II of this report.

One group convened by the Institute of Medicine has suggested “that the university has a responsibility to ensure that the size of a research unit does not outstrip the mentor's ability to maintain adequate supervision” (IOM, 1989a, p. 85). Others have noted that although it may be desirable to limit the number of trainees assigned to a senior investigator, there is insufficient information at this time to suggest that numbers alone significantly affect the quality of research supervision (IOM, 1989a, p. 33).

National Academy of Sciences (US), National Academy of Engineering (US), and Institute of Medicine (US) Panel on Scientific Responsibility and the Conduct of Research. Responsible Science: Ensuring the Integrity of the Research Process: Volume I. Washington (DC): National Academies Press (US); 1992. Chapter 2, Scientific Principles and Research Practices.


NIH Clinical Research Trials and You: Guiding Principles for Ethical Research

Pursuing Potential Research Participants' Protections


“When people are invited to participate in research, there is a strong belief that it should be their choice based on their understanding of what the study is about, and what the risks and benefits of the study are,” said Dr. Christine Grady, chief of the NIH Clinical Center Department of Bioethics, to Clinical Center Radio in a podcast.

Clinical research advances the understanding of science and promotes human health. However, it is important to remember the individuals who volunteer to participate in research. Researchers can take precautions – in the planning, implementation and follow-up of studies – to protect these participants. Ethical guidelines are established for clinical research to protect patient volunteers and to preserve the integrity of the science.

NIH Clinical Center researchers published seven main principles to guide the conduct of ethical research:

Social and clinical value

Every research study is designed to answer a specific question. The answer should be important enough to justify asking people to accept some risk or inconvenience for others. In other words, answers to the research question should contribute to scientific understanding of health or improve our ways of preventing, treating, or caring for people with a given disease to justify exposing participants to the risk and burden of research.

Scientific validity

A study should be designed in a way that will get an understandable answer to the important research question. This includes considering whether the question asked is answerable, whether the research methods are valid and feasible, and whether the study is designed with accepted principles, clear methods, and reliable practices. Invalid research is unethical because it is a waste of resources and exposes people to risk for no purpose.

Fair subject selection

The primary basis for recruiting participants should be the scientific goals of the study — not vulnerability, privilege, or other unrelated factors. Participants who accept the risks of research should be in a position to enjoy its benefits. Specific groups of participants (for example, women or children) should not be excluded from research opportunities without a good scientific reason or a particular susceptibility to risk.

Favorable risk-benefit ratio

Uncertainty about the degree of risks and benefits associated with a clinical research study is inherent. Research risks may be trivial or serious, transient or long-term. Risks can be physical, psychological, economic, or social. Everything should be done to minimize the risks and inconvenience to research participants, to maximize the potential benefits, and to determine that the potential benefits are proportionate to, or outweigh, the risks.

Independent review

To minimize potential conflicts of interest and make sure a study is ethically acceptable before it starts, an independent review panel should review the proposal and ask important questions, including: Are those conducting the trial sufficiently free of bias? Is the study doing all it can to protect research participants? Has the trial been ethically designed and is the risk–benefit ratio favorable? The panel also monitors a study while it is ongoing.

Informed consent

Potential participants should make their own decision about whether they want to participate or continue participating in research. This is done through a process of informed consent in which individuals (1) are accurately informed of the purpose, methods, risks, benefits, and alternatives to the research, (2) understand this information and how it relates to their own clinical situation or interests, and (3) make a voluntary decision about whether to participate.

Respect for potential and enrolled participants

Individuals should be treated with respect from the time they are approached for possible participation — even if they refuse enrollment in a study — throughout their participation and after their participation ends. This includes:

  • respecting their privacy and keeping their private information confidential
  • respecting their right to change their mind, to decide that the research does not match their interests, and to withdraw without a penalty
  • informing them of new information that might emerge in the course of research, which might change their assessment of the risks and benefits of participating
  • monitoring their welfare and, if they experience adverse reactions, unexpected effects, or changes in clinical status, ensuring appropriate treatment and, when necessary, removal from the study
  • informing them about what was learned from the research

More information is available on these seven guiding principles and on bioethics in general.

This page last reviewed on March 16, 2016


Validity – Types, Examples and Guide

Validity

Definition:

Validity refers to the extent to which a concept, measure, or study accurately represents the meaning or reality it is intended to capture. It is a fundamental concept in research and assessment, concerned with the soundness and appropriateness of the conclusions, inferences, or interpretations drawn from the data or evidence collected.

Research Validity

Research validity refers to the degree to which a study accurately measures or reflects what it claims to measure. In other words, research validity concerns whether the conclusions drawn from a study are based on accurate, reliable and relevant data.

Validity is a concept used in logic and research methodology to assess the strength of an argument or the quality of a research study. It refers to the extent to which a conclusion or result is supported by evidence and reasoning.

How to Ensure Validity in Research

Ensuring validity in research involves several steps and considerations throughout the research process. Here are some key strategies to help maintain research validity:

Clearly Define Research Objectives and Questions

Start by clearly defining your research objectives and formulating specific research questions. This helps focus your study and ensures that you are addressing relevant and meaningful research topics.

Use appropriate research design

Select a research design that aligns with your research objectives and questions. Different types of studies, such as experimental, observational, qualitative, or quantitative, have specific strengths and limitations. Choose the design that best suits your research goals.

Use reliable and valid measurement instruments

If you are measuring variables or constructs, ensure that the measurement instruments you use are reliable and valid. This involves using established and well-tested tools or developing your own instruments through rigorous validation processes.
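One widely used internal-consistency check for a multi-item instrument is Cronbach's alpha. The sketch below is a minimal plain-Python illustration; the questionnaire scores are invented for demonstration, not drawn from any real study.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a multi-item scale.

    item_scores: one inner list per questionnaire item, each holding
    that item's score for every respondent.
    """
    k = len(item_scores)
    item_vars = sum(pvariance(item) for item in item_scores)
    # Each respondent's total score across all items
    totals = [sum(resp) for resp in zip(*item_scores)]
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Three items answered by five respondents (hypothetical data)
items = [
    [3, 4, 5, 2, 4],
    [3, 5, 5, 1, 4],
    [2, 4, 4, 2, 3],
]
alpha = cronbach_alpha(items)  # ≈ 0.94 for this invented data
```

Values above roughly 0.7 are conventionally taken to indicate acceptable internal consistency, though the threshold depends on the field and the stakes of the measurement.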

Ensure a representative sample

When selecting participants or subjects for your study, aim for a sample that is representative of the population you want to generalize to. Consider factors such as age, gender, socioeconomic status, and other relevant demographics to ensure your findings can be generalized appropriately.
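One common way to keep a sample's composition aligned with the population is proportionate stratified sampling: draw the same fraction from every stratum. The following is a minimal sketch with an invented population; the grouping variable and sizes are assumptions for illustration.

```python
import random
from collections import defaultdict

def stratified_sample(population, stratum_of, fraction, seed=0):
    """Draw the same fraction from every stratum so the sample
    mirrors the population's composition on that characteristic."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for person in population:
        strata[stratum_of(person)].append(person)
    sample = []
    for members in strata.values():
        n = max(1, round(fraction * len(members)))
        sample.extend(rng.sample(members, n))
    return sample

# Hypothetical population: (id, age_group) pairs, 200 vs. 100 per group
population = [(i, "18-30" if i % 3 else "31-50") for i in range(300)]
sample = stratified_sample(population, lambda p: p[1], fraction=0.1)
# → 30 participants, preserving the 2:1 age-group split
```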

Address potential confounding factors

Identify potential confounding variables or biases that could impact your results. Implement strategies such as randomization, matching, or statistical control to minimize the influence of confounding factors and increase internal validity.

Minimize measurement and response biases

Be aware of measurement biases and response biases that can occur during data collection. Use standardized protocols, clear instructions, and trained data collectors to minimize these biases. Employ techniques like blinding or double-blinding in experimental studies to reduce bias.

Conduct appropriate statistical analyses

Ensure that the statistical analyses you employ are appropriate for your research design and data type. Select statistical tests that are relevant to your research questions and use robust analytical techniques to draw accurate conclusions from your data.
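As one concrete instance of matching analysis to design: when comparing two independent group means, Welch's t-statistic (which does not assume equal variances) is often a safer default than Student's t. A minimal sketch with invented alertness scores:

```python
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's t-statistic for two independent samples
    (does not assume equal group variances)."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical alertness ratings for two groups
caffeine = [7, 8, 6, 9, 7, 8]
control = [5, 6, 5, 7, 6, 5]
t = welch_t(caffeine, control)  # ≈ 3.4 for this invented data
```

In practice the statistic would be referred to a t-distribution with Welch-adjusted degrees of freedom to obtain a p-value; a statistics package handles that step.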

Consider external validity

While it may not always be possible to achieve high external validity, be mindful of the generalizability of your findings. Clearly describe your sample and study context to help readers understand the scope and limitations of your research.

Peer review and replication

Submit your research for peer review by experts in your field. Peer review helps identify potential flaws, biases, or methodological issues that can impact validity. Additionally, encourage replication studies by other researchers to validate your findings and enhance the overall reliability of the research.

Transparent reporting

Clearly and transparently report your research methods, procedures, data collection, and analysis techniques. Provide sufficient details for others to evaluate the validity of your study and replicate your work if needed.

Types of Validity

There are several types of validity that researchers consider when designing and evaluating studies. Here are some common types of validity:

Internal Validity

Internal validity relates to the degree to which a study accurately identifies causal relationships between variables. It addresses whether the observed effects can be attributed to the manipulated independent variable rather than confounding factors. Threats to internal validity include selection bias, history effects, maturation of participants, and instrumentation issues.

External Validity

External validity concerns the generalizability of research findings to the broader population or real-world settings. It assesses the extent to which the results can be applied to other individuals, contexts, or timeframes. Factors that can limit external validity include sample characteristics, research settings, and the specific conditions under which the study was conducted.

Construct Validity

Construct validity examines whether a study adequately measures the intended theoretical constructs or concepts. It focuses on the alignment between the operational definitions used in the study and the underlying theoretical constructs. Construct validity can be threatened by issues such as poor measurement tools, inadequate operational definitions, or a lack of clarity in the conceptual framework.

Content Validity

Content validity refers to the degree to which a measurement instrument or test adequately covers the entire range of the construct being measured. It assesses whether the items or questions included in the measurement tool represent the full scope of the construct. Content validity is often evaluated through expert judgment, reviewing the relevance and representativeness of the items.

Criterion Validity

Criterion validity determines the extent to which a measure or test is related to an external criterion or standard. It assesses whether the results obtained from a measurement instrument align with other established measures or outcomes. Criterion validity can be divided into two subtypes: concurrent validity, which examines the relationship between the measure and the criterion at the same time, and predictive validity, which investigates the measure’s ability to predict future outcomes.

Face Validity

Face validity refers to the degree to which a measurement or test appears, on the surface, to measure what it intends to measure. It is a subjective assessment based on whether the items seem relevant and appropriate to the construct being measured. Face validity is often used as an initial evaluation before conducting more rigorous validity assessments.

Importance of Validity

Validity is crucial in research for several reasons:

  • Accurate Measurement: Validity ensures that the measurements or observations in a study accurately represent the intended constructs or variables. Without validity, researchers cannot be confident that their results truly reflect the phenomena they are studying. Validity allows researchers to draw accurate conclusions and make meaningful inferences based on their findings.
  • Credibility and Trustworthiness: Validity enhances the credibility and trustworthiness of research. When a study demonstrates high validity, it indicates that the researchers have taken appropriate measures to ensure the accuracy and integrity of their work. This strengthens the confidence of other researchers, peers, and the wider scientific community in the study’s results and conclusions.
  • Generalizability: Validity helps determine the extent to which research findings can be generalized beyond the specific sample and context of the study. By addressing external validity, researchers can assess whether their results can be applied to other populations, settings, or situations. This information is valuable for making informed decisions, implementing interventions, or developing policies based on research findings.
  • Sound Decision-Making: Validity supports informed decision-making in various fields, such as medicine, psychology, education, and social sciences. When validity is established, policymakers, practitioners, and professionals can rely on research findings to guide their actions and interventions. Validity ensures that decisions are based on accurate and trustworthy information, which can lead to better outcomes and more effective practices.
  • Avoiding Errors and Bias: Validity helps researchers identify and mitigate potential errors and biases in their studies. By addressing internal validity, researchers can minimize confounding factors and alternative explanations, ensuring that the observed effects are genuinely attributable to the manipulated variables. Validity assessments also highlight measurement errors or shortcomings, enabling researchers to improve their measurement tools and procedures.
  • Progress of Scientific Knowledge: Validity is essential for the advancement of scientific knowledge. Valid research contributes to the accumulation of reliable and valid evidence, which forms the foundation for building theories, developing models, and refining existing knowledge. Validity allows researchers to build upon previous findings, replicate studies, and establish a cumulative body of knowledge in various disciplines. Without validity, the scientific community would struggle to make meaningful progress and establish a solid understanding of the phenomena under investigation.
  • Ethical Considerations: Validity is closely linked to ethical considerations in research. Conducting valid research ensures that participants’ time, effort, and data are not wasted on flawed or invalid studies. It upholds the principle of respect for participants’ autonomy and promotes responsible research practices. Validity is also important when making claims or drawing conclusions that may have real-world implications, as misleading or invalid findings can have adverse effects on individuals, organizations, or society as a whole.

Examples of Validity

Here are some examples of validity in different contexts:

Logical validity

  • All men are mortal. John is a man. Therefore, John is mortal. This argument is logically valid because the conclusion follows logically from the premises.
  • If it is raining, then the ground is wet. The ground is wet. Therefore, it is raining. This argument is not logically valid: it affirms the consequent, since there could be other reasons for the ground being wet, such as watering the plants.

Construct validity

  • In a study examining the relationship between caffeine consumption and alertness, the researchers use established measures of both variables, ensuring that they are accurately capturing the concepts they intend to measure. This demonstrates construct validity.
  • A researcher develops a new questionnaire to measure anxiety levels. They administer the questionnaire to a group of participants and find that it correlates highly with other established anxiety measures. This indicates good construct validity for the new questionnaire.

External validity

  • A study on the effects of a particular teaching method is conducted in a controlled laboratory setting. The findings may lack external validity because conditions in the lab may not accurately reflect real-world classroom settings.
  • A research study on the effects of a new medication includes participants from diverse backgrounds and age groups, increasing the external validity of the findings to a broader population.

Internal validity

  • In an experiment, a researcher manipulates the independent variable (e.g., a new drug) and controls for other variables to ensure that any observed effects on the dependent variable (e.g., symptom reduction) are indeed due to the manipulation. This establishes internal validity.
  • A researcher examines the relationship between exercise and mood by administering questionnaires to participants. The study lacks internal validity because it does not control for other potential factors that could influence mood, such as diet or stress levels.

Face validity

  • A teacher develops a new test to assess students’ knowledge of a particular subject. The items on the test appear relevant to the topic and align with what one would expect to find on such a test. This suggests face validity, as the test appears to measure what it intends to measure.
  • A company develops a new customer satisfaction survey. The questions seem to address key aspects of the customer experience and capture the relevant information. This indicates face validity, as the survey seems appropriate for assessing customer satisfaction.

Content validity

  • A team of experts reviews a comprehensive curriculum for a high school biology course, evaluating whether it covers all the essential topics and concepts students need for a thorough understanding of biology. This demonstrates content validity, as the curriculum is representative of the domain it intends to cover.
  • A researcher develops a questionnaire to assess career satisfaction. The questions encompass various dimensions of job satisfaction, such as salary, work-life balance, and career growth. This indicates content validity, as the questionnaire adequately represents the different aspects of career satisfaction.

Criterion validity

  • A company wants to evaluate the effectiveness of a new employee selection test. They administer the test to a group of job applicants and later assess the job performance of those who were hired. A strong correlation between test scores and subsequent job performance suggests criterion validity, indicating that the test is predictive of job success.
  • A researcher wants to determine if a new medical diagnostic tool accurately identifies a specific disease. They compare the results of the tool with the gold-standard diagnostic method and find a high level of agreement. This demonstrates criterion validity, indicating that the new tool accurately diagnoses the disease.

Where to Write About Validity in A Thesis

In a thesis, discussions related to validity are typically included in the methodology and results sections. Here are some specific places where you can address validity within your thesis:

Research Design and Methodology

In the methodology section, provide a clear and detailed description of the measures, instruments, or data collection methods used in your study. Discuss the steps taken to establish or assess the validity of these measures. Explain the rationale behind the selection of specific validity types relevant to your study, such as content validity, criterion validity, or construct validity. Discuss any modifications or adaptations made to existing measures and their potential impact on validity.

Measurement Procedures

In the methodology section, elaborate on the procedures implemented to ensure the validity of measurements. Describe how potential biases or confounding factors were addressed, controlled, or accounted for to enhance internal validity. Provide details on how you ensured that the measurement process accurately captures the intended constructs or variables of interest.

Data Collection

In the methodology section, discuss the steps taken to collect data and ensure data validity. Explain any measures implemented to minimize errors or biases during data collection, such as training of data collectors, standardized protocols, or quality control procedures. Address any potential limitations or threats to validity related to the data collection process.

Data Analysis and Results

In the results section, present the analysis and findings related to validity. Report any statistical tests, correlations, or other measures used to assess validity. Provide interpretations and explanations of the results obtained. Discuss the implications of the validity findings for the overall reliability and credibility of your study.

Limitations and Future Directions

In the discussion or conclusion section, reflect on the limitations of your study, including limitations related to validity. Acknowledge any potential threats or weaknesses to validity that you encountered during your research. Discuss how these limitations may have influenced the interpretation of your findings and suggest avenues for future research that could address these validity concerns.

Applications of Validity

Validity is applicable in various areas and contexts where research and measurement play a role. Here are some common applications of validity:

Psychological and Behavioral Research

Validity is crucial in psychology and behavioral research to ensure that measurement instruments accurately capture constructs such as personality traits, intelligence, attitudes, emotions, or psychological disorders. Validity assessments help researchers determine if their measures are truly measuring the intended psychological constructs and if the results can be generalized to broader populations or real-world settings.

Educational Assessment

Validity is essential in educational assessment to determine if tests, exams, or assessments accurately measure students’ knowledge, skills, or abilities. It ensures that the assessment aligns with the educational objectives and provides reliable information about student performance. Validity assessments help identify if the assessment is valid for all students, regardless of their demographic characteristics, language proficiency, or cultural background.

Program Evaluation

Validity plays a crucial role in program evaluation, where researchers assess the effectiveness and impact of interventions, policies, or programs. By establishing validity, evaluators can determine if the observed outcomes are genuinely attributable to the program being evaluated rather than extraneous factors. Validity assessments also help ensure that the evaluation findings are applicable to different populations, contexts, or timeframes.

Medical and Health Research

Validity is essential in medical and health research to ensure the accuracy and reliability of diagnostic tools, measurement instruments, and clinical assessments. Validity assessments help determine if a measurement accurately identifies the presence or absence of a medical condition, measures the effectiveness of a treatment, or predicts patient outcomes. Validity is crucial for establishing evidence-based medicine and informing medical decision-making.

Social Science Research

Validity is relevant in various social science disciplines, including sociology, anthropology, economics, and political science. Researchers use validity to ensure that their measures and methods accurately capture social phenomena, such as social attitudes, behaviors, social structures, or economic indicators. Validity assessments support the reliability and credibility of social science research findings.

Market Research and Surveys

Validity is important in market research and survey studies to ensure that the survey questions effectively measure consumer preferences, buying behaviors, or attitudes towards products or services. Validity assessments help researchers determine if the survey instrument is accurately capturing the desired information and if the results can be generalized to the target population.

Limitations of Validity

Here are some limitations of validity:

  • Construct Validity: Limitations of construct validity include the potential for measurement error, inadequate operational definitions of constructs, or the failure to capture all aspects of a complex construct.
  • Internal Validity: Limitations of internal validity may arise from confounding variables, selection bias, or the presence of extraneous factors that could influence the study outcomes, making it difficult to attribute causality accurately.
  • External Validity: Limitations of external validity can occur when the study sample does not represent the broader population, when the research setting differs significantly from real-world conditions, or when the study lacks ecological validity, i.e., the findings do not reflect real-world complexities.
  • Measurement Validity: Limitations of measurement validity can arise from measurement error, inadequately designed or flawed measurement scales, or limitations inherent in self-report measures, such as social desirability bias or recall bias.
  • Statistical Conclusion Validity: Limitations in statistical conclusion validity can occur due to sampling errors, inadequate sample sizes, or improper statistical analysis techniques, leading to incorrect conclusions or generalizations.
  • Temporal Validity: Limitations of temporal validity arise when the study results become outdated due to changes in the studied phenomena, interventions, or contextual factors.
  • Researcher Bias: Researcher bias can affect the validity of a study. Biases can emerge through the researcher’s subjective interpretation, influence of personal beliefs, or preconceived notions, leading to unintentional distortion of findings or failure to consider alternative explanations.
  • Ethical Validity: Limitations can arise if the study design or methods involve ethical concerns, such as the use of deceptive practices, inadequate informed consent, or potential harm to participants.

Also see  Reliability Vs Validity

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


Chapter 2: Principles of Research

2.1  Basic Concepts

Before we address where research questions in psychology come from—and what makes them more or less interesting—it is important to understand the kinds of questions that researchers in psychology typically ask. This requires a quick introduction to several basic concepts, many of which we will return to in more detail later in the book.

Research questions in psychology are about variables. A variable is a quantity or quality that varies across people or situations. For example, the height of the students in a psychology class is a variable because it varies from student to student. The sex of the students is also a variable as long as there are both male and female students in the class. A quantitative variable is a quantity, such as height, that is typically measured by assigning a number to each individual. Other examples of quantitative variables include people’s level of talkativeness, how depressed they are, and the number of siblings they have. A categorical variable is a quality, such as sex, and is typically measured by assigning a category label to each individual. Other examples include people’s nationality, their occupation, and whether they are receiving psychotherapy.

“Lots of Candy Could Lead to Violence”

Although researchers in psychology know that  correlation does not imply causation , many journalists do not. Many headlines suggest that a causal relationship has been demonstrated, when a careful reading of the articles shows that it has not because of the directionality and third-variable problems.

One article is about a study showing that children who ate candy every day were more likely than other children to be arrested for a violent offense later in life. But could candy really “lead to” violence, as the headline suggests? What alternative explanations can you think of for this statistical relationship? How could the headline be rewritten so that it is not misleading?

As we will see later in the book, there are various ways that researchers address the directionality and third-variable problems. The most effective, however, is to conduct an experiment. An experiment is a study in which the researcher manipulates the independent variable. For example, instead of simply measuring how much people exercise, a researcher could bring people into a laboratory and randomly assign half of them to run on a treadmill for 15 minutes and the rest to sit on a couch for 15 minutes. Although this seems like a minor addition to the research design, it is extremely important. Now if the exercisers end up in more positive moods than those who did not exercise, it cannot be because their moods affected how much they exercised (because it was the researcher who determined how much they exercised). Likewise, it cannot be because some third variable (e.g., physical health) affected both how much they exercised and what mood they were in (because, again, it was the researcher who determined how much they exercised). Thus experiments eliminate the directionality and third-variable problems and allow researchers to draw firm conclusions about causal relationships.
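The random assignment step described above can be sketched in a few lines. The participant pool and condition names below are hypothetical, echoing the treadmill/couch example:

```python
import random

def randomly_assign(participants, conditions, seed=0):
    """Shuffle participants, then deal them round-robin into conditions,
    so assignment is unrelated to any pre-existing characteristic."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

# 20 hypothetical participants split between two conditions
groups = randomly_assign(range(20), ["treadmill", "couch"])
```

Because the shuffle, not the participants, determines group membership, any pre-existing difference (mood, fitness, and so on) is equally likely to land in either condition.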

2.2  Generating Good Research Questions

Good research must begin with a good research question. Yet coming up with good research questions is something that novice researchers often find difficult and stressful. One reason is that this is a creative process that can appear mysterious—even magical—with experienced researchers seeming to pull interesting research questions out of thin air. However, psychological research on creativity has shown that it is neither as mysterious nor as magical as it appears. It is largely the product of ordinary thinking strategies and persistence (Weisberg, 1993). This section covers some fairly simple strategies for finding general research ideas, turning those ideas into empirically testable research questions, and finally evaluating those questions in terms of how interesting they are and how feasible they would be to answer.

Finding Inspiration

Research questions often begin as more general research ideas—usually focusing on some behaviour or psychological characteristic: talkativeness, memory for touches, depression, bungee jumping, and so on. Before looking at how to turn such ideas into empirically testable research questions, it is worth looking at where such ideas come from in the first place. Three of the most common sources of inspiration are informal observations, practical problems, and previous research.

Informal observations include direct observations of our own and others’ behaviour as well as secondhand observations from nonscientific sources such as newspapers, books, and so on. For example, you might notice that you always seem to be in the slowest moving line at the grocery store. Could it be that most people think the same thing? Or you might read in the local newspaper about people donating money and food to a local family whose house has burned down and begin to wonder about who makes such donations and why. Some of the most famous research in psychology has been inspired by informal observations. Stanley Milgram’s famous research on obedience, for example, was inspired in part by journalistic reports of the trials of accused Nazi war criminals—many of whom claimed that they were only obeying orders. This led him to wonder about the extent to which ordinary people will commit immoral acts simply because they are ordered to do so by an authority figure (Milgram, 1963).

Practical problems can also inspire research ideas, leading directly to applied research in such domains as law, health, education, and sports. Can human figure drawings help children remember details about being physically or sexually abused? How effective is psychotherapy for depression compared to drug therapy? To what extent do cell phones impair people’s driving ability? How can we teach children to read more efficiently? What is the best mental preparation for running a marathon?

Probably the most common inspiration for new research ideas, however, is previous research. Recall that science is a kind of large-scale collaboration in which many different researchers read and evaluate each other’s work and conduct new studies to build on it. Of course, experienced researchers are familiar with previous research in their area of expertise and probably have a long list of ideas. This suggests that novice researchers can find inspiration by consulting with a more experienced researcher (e.g., students can consult a faculty member). But they can also find inspiration by picking up a copy of almost any professional journal and reading the titles and abstracts. In one typical issue of Psychological Science, for example, you can find articles on the perception of shapes, anti-Semitism, police lineups, the meaning of death, second-language learning, people who seek negative emotional experiences, and many other topics. If you can narrow your interests down to a particular topic (e.g., memory) or domain (e.g., health care), you can also look through more specific journals, such as Memory & Cognition or Health Psychology.

Generating Empirically Testable Research Questions

Once you have a research idea, you need to use it to generate one or more empirically testable research questions, that is, questions expressed in terms of a single variable or relationship between variables. One way to do this is to look closely at the discussion section in a recent research article on the topic. This is the last major section of the article, in which the researchers summarize their results, interpret them in the context of past research, and suggest directions for future research. These suggestions often take the form of specific research questions, which you can then try to answer with additional research. This can be a good strategy because it is likely that the suggested questions have already been identified as interesting and important by experienced researchers.

But you may also want to generate your own research questions. How can you do this? First, if you have a particular behaviour or psychological characteristic in mind, you can simply conceptualize it as a variable and ask how frequent or intense it is. How many words on average do people speak per day? How accurate are children’s memories of being touched? What percentage of people have sought professional help for depression? If the question has never been studied scientifically—which is something that you will learn in your literature review—then it might be interesting and worth pursuing.

If scientific research has already answered the question of how frequent or intense the behaviour or characteristic is, then you should consider turning it into a question about a statistical relationship between that behaviour or characteristic and some other variable. One way to do this is to ask yourself the following series of more general questions and write down all the answers you can think of.

  • What are some possible causes of the behaviour or characteristic?
  • What are some possible effects of the behaviour or characteristic?
  • What types of people might exhibit more or less of the behaviour or characteristic?
  • What types of situations might elicit more or less of the behaviour or characteristic?

In general, each answer you write down can be conceptualized as a second variable, suggesting a question about a statistical relationship. If you were interested in talkativeness, for example, it might occur to you that a possible cause of this psychological characteristic is family size. Is there a statistical relationship between family size and talkativeness? Or it might occur to you that people seem to be more talkative in same-sex groups than mixed-sex groups. Is there a difference in the average level of talkativeness of people in same-sex groups and people in mixed-sex groups? This approach should allow you to generate many different empirically testable questions about almost any behaviour or psychological characteristic.

If through this process you generate a question that has never been studied scientifically—which again is something that you will learn in your literature review—then it might be interesting and worth pursuing. But what if you find that it has been studied scientifically? Although novice researchers often want to give up and move on to a new question at this point, this is not necessarily a good strategy. For one thing, the fact that the question has been studied scientifically and the research published suggests that it is of interest to the scientific community. For another, the question can almost certainly be refined so that its answer will still contribute something new to the research literature. Again, asking yourself a series of more general questions about the statistical relationship is a good strategy.

·         Are there other ways to operationally define the variables?

·         Are there types of people for whom the statistical relationship might be stronger or weaker?

·         Are there situations in which the statistical relationship might be stronger or weaker—including situations with practical importance?

For example, research has shown that women and men speak about the same number of words per day—but this was when talkativeness was measured in terms of the number of words spoken per day among college students in the United States and Mexico. We can still ask whether other ways of measuring talkativeness—perhaps the number of different people spoken to each day—produce the same result. Or we can ask whether studying elderly people or people from other cultures produces the same result. Again, this approach should help you generate many different research questions about almost any statistical relationship.

2.3  Evaluating Research Questions

Researchers usually generate many more research questions than they ever attempt to answer. This means they must have some way of evaluating the research questions they generate so that they can choose which ones to pursue. In this section, we consider two criteria for evaluating research questions: the interestingness of the question and the feasibility of answering it.

Interestingness

How often do people tie their shoes? Do people feel pain when you punch them in the jaw? Are women more likely to wear makeup than men? Do people prefer vanilla or chocolate ice cream? Although it would be a fairly simple matter to design a study and collect data to answer these questions, you probably would not want to because they are not interesting. We are not talking here about whether a research question is interesting to us personally but whether it is interesting to people more generally and, especially, to the scientific community. But what makes a research question interesting in this sense? Here we look at three factors that affect the interestingness of a research question: the answer is in doubt, the answer fills a gap in the research literature, and the answer has important practical implications.

First, a research question is interesting to the extent that its answer is in doubt. Obviously, questions that have been answered by scientific research are no longer interesting as the subject of new empirical research. But the fact that a question has not been answered by scientific research does not necessarily make it interesting. There has to be some reasonable chance that the answer to the question will be something that we did not already know. But how can you assess this before actually collecting data? One approach is to try to think of reasons to expect different answers to the question—especially ones that seem to conflict with common sense. If you can think of reasons to expect at least two different answers, then the question might be interesting. If you can think of reasons to expect only one answer, then it probably is not. The question of whether women are more talkative than men is interesting because there are reasons to expect both answers. The existence of the stereotype itself suggests the answer could be yes, but the fact that women’s and men’s verbal abilities are fairly similar suggests the answer could be no. The question of whether people feel pain when you punch them in the jaw is not interesting because there is absolutely no reason to think that the answer could be anything other than a resounding yes.

A second important factor to consider when deciding if a research question is interesting is whether answering it will fill a gap in the research literature. Again, this means in part that the question has not already been answered by scientific research. But it also means that the question is in some sense a natural one for people who are familiar with the research literature. For example, the question of whether human figure drawings can help children recall touch information would be likely to occur to anyone who was familiar with research on the unreliability of eyewitness memory (especially in children) and the ineffectiveness of some alternative interviewing techniques.

A final factor to consider when deciding whether a research question is interesting is whether its answer has important practical implications. Again, the question of whether human figure drawings help children recall information about being touched has important implications for how children are interviewed in physical and sexual abuse cases. The question of whether cell phone use impairs driving is interesting because it is relevant to the personal safety of everyone who travels by car and to the debate over whether cell phone use should be restricted by law.

Feasibility

A second important criterion for evaluating research questions is the feasibility of successfully answering them. There are many factors that affect feasibility, including time, money, equipment and materials, technical knowledge and skill, and access to research participants. Clearly, researchers need to take these factors into account so that they do not waste time and effort pursuing research that they cannot complete successfully.

Looking through a sample of professional journals in psychology will reveal many studies that are complicated and difficult to carry out. These include longitudinal designs in which participants are tracked over many years, neuroimaging studies in which participants’ brain activity is measured while they carry out various mental tasks, and complex non-experimental studies involving several variables and complicated statistical analyses. Keep in mind, though, that such research tends to be carried out by teams of highly trained researchers whose work is often supported in part by government and private grants. Keep in mind also that research does not have to be complicated or difficult to produce interesting and important results. Looking through a sample of professional journals will also reveal studies that are relatively simple and easy to carry out—perhaps involving a convenience sample of college students and a paper-and-pencil task.

A final point here is that it is generally good practice to use methods that have already been used successfully by other researchers. For example, if you want to manipulate people’s moods to make some of them happy, it would be a good idea to use one of the many approaches that have been used successfully by other researchers (e.g., paying them a compliment). This is good not only for the sake of feasibility—the approach is “tried and true”—but also because it provides greater continuity with previous research. This makes it easier to compare your results with those of other researchers and to understand the implications of their research for yours, and vice versa.

Key Takeaways

·         Research ideas can come from a variety of sources, including informal observations, practical problems, and previous research.

·         Research questions expressed in terms of variables and relationships between variables can be suggested by other researchers or generated by asking a series of more general questions about the behaviour or psychological characteristic of interest.

·         It is important to evaluate how interesting a research question is before designing a study and collecting data to answer it. Factors that affect interestingness are the extent to which the answer is in doubt, whether it fills a gap in the research literature, and whether it has important practical implications.

·         It is also important to evaluate how feasible a research question will be to answer. Factors that affect feasibility include time, money, technical knowledge and skill, and access to special equipment and research participants.


Research Methods in Psychology & Neuroscience Copyright © by Dalhousie University Introduction to Psychology and Neuroscience Team. All Rights Reserved.


Characteristics of research


  • Empirical - based on observation and experimentation.
  • Systematic - follows an orderly and sequential procedure.
  • Controlled - all variables except those under investigation are kept constant.
  • Employs hypotheses - a hypothesis guides the investigation process.
  • Analytical - all data are critically analysed so that no errors arise in their interpretation.
  • Objective, unbiased, and logical - all findings follow logically from empirical evidence.
  • Employs quantitative or statistical methods - data are transformed into numerical measures and treated statistically.



A Roadmap to Successful Scientific Publishing, pp. 27–34

Understanding Research Ethics

  • Sarah Cuschieri
  • First Online: 22 April 2022


As a researcher, whatever your career stage, you need to understand and practice good research ethics. Moral and ethical principles are requisite in research to ensure no deception or harm occurs to participants, the scientific community, or society. Failure to follow such principles constitutes research misconduct, in which case the researcher faces repercussions ranging from withdrawal of an article from publication to potential job loss. This chapter describes the various types of research misconduct that you should be aware of: data fabrication and falsification, plagiarism, research bias, compromised data integrity, and researcher and funder conflicts of interest. A sound comprehension of research ethics will take you a long way in your career.

  • Research ethics
  • Scientific bias
  • Conflict of interest


About this chapter


Cuschieri, S. (2022). Understanding Research Ethics. In: A Roadmap to Successful Scientific Publishing. Springer, Cham. https://doi.org/10.1007/978-3-030-99295-8_2


Research Methods | Definitions, Types, Examples

Research methods are specific procedures for collecting and analyzing data. Developing your research methods is an integral part of your research design . When planning your methods, there are two key decisions you will make.

First, decide how you will collect data . Your methods depend on what type of data you need to answer your research question :

  • Qualitative vs. quantitative : Will your data take the form of words or numbers?
  • Primary vs. secondary : Will you collect original data yourself, or will you use data that has already been collected by someone else?
  • Descriptive vs. experimental : Will you take measurements of something as it is, or will you perform an experiment?

Second, decide how you will analyze the data .

  • For quantitative data, you can use statistical analysis methods to test relationships between variables.
  • For qualitative data, you can use methods such as thematic analysis to interpret patterns and meanings in the data.


Data is the information that you collect for the purposes of answering your research question . The type of data you need depends on the aims of your research.

Qualitative vs. quantitative data

Your choice of qualitative or quantitative data collection depends on the type of knowledge you want to develop.

For questions about ideas, experiences and meanings, or to study something that can’t be described numerically, collect qualitative data .

If you want to develop a more mechanistic understanding of a topic, or your research involves hypothesis testing , collect quantitative data .

You can also take a mixed methods approach , where you use both qualitative and quantitative research methods.

Primary vs. secondary research

Primary research is any original data that you collect yourself for the purposes of answering your research question (e.g. through surveys , observations and experiments ). Secondary research is data that has already been collected by other researchers (e.g. in a government census or previous scientific studies).

If you are exploring a novel research question, you’ll probably need to collect primary data . But if you want to synthesize existing knowledge, analyze historical trends, or identify patterns on a large scale, secondary data might be a better choice.

Descriptive vs. experimental data

In descriptive research , you collect data about your study subject without intervening. The validity of your research will depend on your sampling method .

In experimental research , you systematically intervene in a process and measure the outcome. The validity of your research will depend on your experimental design .

To conduct an experiment, you need to be able to vary your independent variable , precisely measure your dependent variable, and control for confounding variables . If it’s practically and ethically possible, this method is the best choice for answering questions about cause and effect.
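The logic of an experiment, manipulating an independent variable and measuring its effect on a dependent variable, can be sketched numerically. The data below are hypothetical, and Welch's t statistic is computed with the standard library only (a real analysis would also report degrees of freedom and a p value):

```python
import math
import statistics

# Hypothetical experiment data (illustrative only): reaction times in ms
# for a control group and a treatment group.
control = [512, 498, 530, 505, 521, 517, 509]
treatment = [548, 560, 542, 555, 538, 551, 564]

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    na, nb = len(a), len(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(va / na + vb / nb)

t = welch_t(control, treatment)
print(f"t = {t:.2f}")  # a large absolute t suggests the group means differ
```

The key design point mirrors the text: the group assignment is the independent variable, the reaction time is the dependent variable, and confounds must be controlled so that the mean difference can be attributed to the manipulation.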



Your data analysis methods will depend on the type of data you collect and how you prepare it for analysis.

Data can often be analyzed both quantitatively and qualitatively. For example, survey responses could be analyzed qualitatively by studying the meanings of responses or quantitatively by studying the frequencies of responses.
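Analyzing the frequencies of responses, the quantitative route mentioned above, is a one-step computation once responses have been coded into categories. The response data below are hypothetical, used only to show the shape of a frequency analysis:

```python
from collections import Counter

# Hypothetical open-ended survey responses, already coded into categories.
responses = ["positive", "neutral", "positive", "negative",
             "positive", "neutral", "positive", "negative"]

freq = Counter(responses)
total = len(responses)
for category, count in freq.most_common():
    print(f"{category}: {count} ({count / total:.0%})")
```

The qualitative route would instead examine the wording of the original responses; the two analyses start from the same raw data.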

Qualitative analysis methods

Qualitative analysis is used to understand words, ideas, and experiences. You can use it to interpret data that was collected:

  • From open-ended surveys and interviews , literature reviews , case studies , ethnographies , and other sources that use text rather than numbers.
  • Using non-probability sampling methods .

Qualitative analysis tends to be quite flexible and relies on the researcher’s judgement, so you have to reflect carefully on your choices and assumptions and be careful to avoid research bias .

Quantitative analysis methods

Quantitative analysis uses numbers and statistics to understand frequencies, averages and correlations (in descriptive studies) or cause-and-effect relationships (in experiments).

You can use quantitative analysis to interpret data that was collected either:

  • During an experiment .
  • Using probability sampling methods .

Because the data is collected and analyzed in a statistically valid way, the results of quantitative analysis can be easily standardized and shared among researchers.



Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.
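The 100-student example above corresponds to a simple random sample: every member of the population has an equal chance of selection, and no one is selected twice. A minimal sketch, assuming a hypothetical sampling frame of 5,000 student IDs:

```python
import random

# Hypothetical sampling frame: IDs for a population of 5,000 students.
population = list(range(1, 5001))

random.seed(42)  # fixed seed so the illustration is reproducible
sample = random.sample(population, k=100)  # sampling without replacement

print(len(sample))       # 100
print(len(set(sample)))  # 100 -- every sampled ID is distinct
```

In practice the hard part is constructing the sampling frame itself; once it exists, drawing the sample is trivial.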

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.


Good manufacturing practice

This content applies to human and veterinary medicines.

Any manufacturer of medicines intended for the EU market, no matter where in the world it is located, must comply with GMP.

GMP requires that medicines:

  • are of consistent high quality;
  • are appropriate for their intended use;
  • meet the requirements of the marketing authorisation or clinical trial authorisation.

Also on this topic

  • EudraGMDP database
  • Guidance on good manufacturing practice and good distribution practice: Questions and answers

GMP/GDP Inspectors Working Group

  • Mutual recognition agreements (MRA)
  • International collaboration on GMP inspections
  • Joint Audit Programme

Regulatory expectations and GMP certificates following the COVID-19 public health emergency

EMA, the European Commission and Heads of Medicines Agencies (HMA) have phased out the extraordinary regulatory flexibilities for medicines put in place during the COVID-19 pandemic to help address regulatory and supply challenges arising from the pandemic.

This follows the end of the COVID-19 public health emergency declared by WHO in May 2023.

On-site GMP and GDP inspections have restarted after being postponed or carried out remotely during the pandemic.

However, a considerable number of postponed inspections still need to be carried out.

The validity of GMP and GDP certificates was extended until the end of 2023. The GMP/GDP Inspectors Working Group has decided to extend that validity until 2024 or until the conclusion of the next on-site inspection, whichever comes first, except where clarifying remarks in the document state otherwise.

Meanwhile, competent authorities will perform risk-based supervision of sites, either by on-site inspections or distant assessments, and based on the outcome may continue to issue, withdraw or restrict GMP and GDP certificates, as appropriate.

The inspections will be prioritised based on risk, so that the highest-priority manufacturers, such as manufacturers of sterile products and biological products, and wholesale distributors, are inspected first. In addition, inspections will be prioritised depending on the date of the last inspection.

Questions about the validity date of a GMP or GDP certificate should be addressed to the competent authority that issued the certificate. 

It is incumbent upon manufacturers, importers and distributors to continue complying with GMP and GDP as appropriate.

Supervisory authorities will remain vigilant to ensure the quality of medicines that are made available to patients in the EEA.

Inspections (including distant assessments) may be carried out at any time. In case of serious non-compliance , appropriate regulatory actions will be triggered.

The guidance was agreed by the GMP/GDP Inspectors Working Group  coordinated by EMA. It will be updated when there is additional information available.

The Agency has a coordinating role for GMP inspections of manufacturing sites for medicines whose marketing authorisation in the EU is submitted through the centralised procedure or as part of a referral procedure.

The Agency also plays a key role in coordinating and harmonising GMP activities at an EU level. It is involved in:

  • coordinating the preparation of new and revised guidance on GMP;
  • ensuring common interpretation of EU GMP requirements and related technical issues;
  • developing EU-wide procedures on GMP inspections and related activities;
  • facilitating cooperation between Member States for inspections of manufacturers in third countries.

Marketing authorisation holders and applicants need to use EMA's IRIS system to communicate with EMA on GMP inspections requested by the Agency's scientific committees.

Using IRIS for GMP inspections improves efficiency by harmonising and automating processes and re-using master data held by EMA. It also simplifies retrieving and reporting data.

More information on the use of EMA's IRIS system:

  • IRIS system

Guidance for applicants/MAHs involved in GMP, GCP and GVP inspections coordinated by EMA

IRIS guide for applicants - How to create and submit scientific applications, for industry and individual applicants

Legal framework and guidance

These legal instruments lay down the principles and guidelines of GMP in the EU:

  • Regulation No. 1252/2014  applying to active substances for human use;
  • Directive 2001/83/EC  and Directive (EU) 2017/1572 , applying to medicines for human use;
  • Directive 91/412/EEC and Regulation (EU) 2019/6, applying to medicines for veterinary use;
  • Directive 2001/20/EC and Regulation (EU) 536/2014, applying to investigational medicinal products.

The EU GMP guidelines provide interpretation of these principles and guidelines, supplemented by a series of annexes that modify or augment the detailed guidelines for certain types of product, or provide more specific guidance on a particular topic.

The GMP / Good Distribution Practice (GDP) Inspectors Working Group provides additional interpretation of the EU GMP guidelines in the form of questions and answers (Q&As) .

Annex 1: Manufacture of Sterile Medicinal Products was revised in August 2022. It comes into operation on 25 August 2023 except for point 8.123 which is postponed until 25 August 2024.

Manufacturing authorisation

Manufacturers and importers located in the European Economic Area (EEA) must hold an authorisation issued by the national competent authority of the Member State where they carry out these activities.

They must comply with EU GMP to obtain a manufacturing or import authorisation. They can ensure that they meet all their legal obligations by following the EU GMP guidelines.

Importers are responsible for ensuring that the third-country manufacturers they import from comply with GMP.

Marketing authorisation applicants are responsible for ensuring that the proposed manufacturing sites included in the marketing authorisation application comply with GMP. For more information, see section 5.2 Inspections of the Pre-authorisation guidance.

Registration of manufacturers of active substances

Manufacturers of active substances intended for the manufacture of human medicines for the EU market must register with the national competent authority of the Member State where they are located.

Active substance manufacturers must comply with GMP. In addition, the manufacturer of the finished product is obliged to ensure that the active substances they use have been manufactured in compliance with GMP.

Importers of active substances intended for the EU market are also required to register. In addition, each consignment needs to be accompanied by a confirmation by the competent authority of the country where it is produced that it conforms to GMP standards equivalent to those in the EU, unless a waiver applies.

Responsibility for inspections

In the EU, national competent authorities are responsible for inspecting manufacturing sites located within their own territories.

Manufacturing sites outside the EU are inspected by the national competent authority of the Member State where the EU importer is located, unless a mutual recognition agreement (MRA) is in place between the EU and the country concerned. If an MRA applies, the authorities mutually rely on each other's inspections.

If products are imported directly into more than one Member State from a manufacturing site outside the EU, there may be more than one national competent authority responsible for inspecting it. EMA facilitates cooperation between the authorities concerned in supervising the site.

EU competent authorities plan routine inspections following a risk-based approach, or if there is suspicion of non-compliance.
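A risk-based approach means inspection resources are directed at the sites most likely to present compliance problems. As a purely illustrative sketch (the site attributes, weights and scoring function below are hypothetical, not any authority's actual risk model), prioritisation might look like:

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    years_since_last_inspection: float
    product_criticality: int  # hypothetical scale: 1 (low) to 3 (high)
    prior_deficiencies: int   # deficiencies found at previous inspections

def risk_score(site: Site) -> float:
    # Illustrative weighting only; real authorities apply their own models
    return (site.years_since_last_inspection * 2
            + site.product_criticality * 3
            + site.prior_deficiencies * 1.5)

sites = [
    Site("Site A", years_since_last_inspection=1.0,
         product_criticality=1, prior_deficiencies=0),
    Site("Site B", years_since_last_inspection=4.0,
         product_criticality=3, prior_deficiencies=2),
]

# Schedule the highest-risk sites first
queue = sorted(sites, key=risk_score, reverse=True)
```

The point of the sketch is only that routine planning ranks sites by estimated risk rather than inspecting on a fixed rota; suspicion of non-compliance bypasses this ranking entirely.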

EudraGMDP is a publicly accessible EU database which contains manufacturing and import authorisations, registration of active substance manufacturers, GMP certificates and non-compliance statements.

After inspecting a manufacturing site, EU competent authorities issue a GMP certificate or a non-compliance statement, which is entered in the EudraGMDP database.

EMA chairs and provides the secretariat for the GMP/GDP Inspectors Working Group of senior inspectors appointed by all the EEA competent authorities. It meets at EMA four times a year.

The European Commission and observers from EU accession countries, mutual recognition partner authorities, the European Directorate for the Quality of Medicines and HealthCare and the World Health Organization also attend the working group's meetings.

The group provides a forum for harmonisation and discussion of common issues, such as:

  • updates or amendments to the EU GMP guidelines;
  • the compilation of Union procedures;
  • harmonised interpretation of GMP and related requirements.

Compilation of Union procedures

EMA maintains a compilation of GMP and good distribution practice (GDP) inspection-related procedures and forms agreed by all Member States. This facilitates cooperation between EU Member States and supports harmonisation and exchange of inspection-related information.

It covers the basis for national procedures that form part of the national inspectorates' quality systems:

Compilation of Union procedures on inspections and exchange of information

EMA publishes Word and PDF versions of some of the templates for the convenience of inspectorates.

The forms and templates should be downloaded and saved before being completed, using, for example, the "Save target as" function. To report any technical issues with a form, please use the EMA Service Desk portal.

  • Rapid alert notification of a quality defect / recall
  • Follow-up and non-urgent information for quality defects
  • Good-manufacturing-practice inspection report - Community format

Revision of template for serious GMP non-compliance

EMA's GMP/GDP Inspectors Working Group is discussing actions required after an inspection concludes that a manufacturing site does not comply with GMP, specifically where this can lead to a shortage of critical medicines. In 2018, EMA held a public consultation on an updated template for the GMP non-compliance statement:

  • Public consultation concerning the European Union template for good manufacturing practice (GMP) non-compliance statement

Inspections for pharmaceutical starting materials

Plasma master file (PMF) inspections

For products derived from blood or blood plasma, EMA is responsible for coordinating inspections of the blood establishments in which collection, testing, processing, storage and distribution are carried out under the PMF certification procedure.

For more information on the PMF certification procedure, see Plasma master files .

Vaccine antigen master file (VAMF) inspections

EMA is responsible for coordinating inspections of vaccine antigen manufacturing sites under the VAMF certification procedure.

For more information on the VAMF certification procedure, see Vaccine antigens .

Mutual recognition agreements

The EU has signed mutual recognition agreements on GMP inspections with regulatory authorities outside the EU. This allows EU authorities and their counterparts to:

  • rely on each other's GMP inspections;
  • waive batch testing of products on entry into their territories;
  • share information on inspections and quality defects.

The scope of each agreement differs.

More information

  • GMP/GDP inspectors working group
  • Questions and answers: Good manufacturing practice
  • International collaboration

Related content

  • Good distribution practice
  • Medicine shortages

Related EU legislation

  • Regulation No. 1252/2014
  • Directive 2003/94/EC
  • Directive 91/412/EEC

Publications

  • PDA Journal: GMP oversight of medicines manufacturers in the EU

Contact point

  • Regulatory and procedural guidance
