Designing a research project: randomised controlled trials and their principles




J M Kendall

Correspondence to: Dr J M Kendall, North Bristol NHS Trust, Frenchay Hospital, Frenchay Park Road, Bristol BS16 1LE, UK; frenchayed{at}cableinet.co.uk

The sixth paper in this series discusses the design and principles of randomised controlled trials.

Keywords: randomised controlled trials

https://doi.org/10.1136/emj.20.2.164


The randomised controlled trial (RCT) is a trial in which subjects are randomly assigned to one of two groups: one (the experimental group) receiving the intervention that is being tested, and the other (the comparison group, or control) receiving an alternative (conventional) treatment (fig 1). The two groups are then followed up to see if there are any differences between them in outcome. The results and subsequent analysis of the trial are used to assess the effectiveness of the intervention, that is, the extent to which a treatment, procedure, or service does patients more good than harm. RCTs are the most stringent way of determining whether a cause-effect relation exists between an intervention and an outcome.1

Figure 1 The randomised controlled trial.

This paper discusses various key features of RCT design, with particular emphasis on the validity of findings. There are many potential errors associated with health services research, but the main ones to be considered are bias, confounding, and chance.2

Bias is the deviation of results from the truth, due to systematic error in the research methodology. Bias occurs in two main forms: (a) selection bias, which occurs when the two groups being studied differ systematically in some way, and (b) observer/information bias, which occurs when there are systematic differences in the way information is collected for the groups being studied.

A confounding factor is some aspect of a subject that is associated both with the outcome of interest and with the intervention of interest. For example, if older people are less likely to receive a new treatment, and are also more likely for unrelated reasons to experience the outcome of interest (for example, admission to hospital), then any observed relation between the intervention and the likelihood of experiencing the outcome would be confounded by age.

Chance is a random error appearing to cause an association between an intervention and an outcome. The most important design strategy to minimise random error is to have a large sample size.

These errors have an important impact on the interpretation and generalisability of the results of a research project. The beauty of a well planned RCT is that these errors can all be effectively reduced or designed out (see box 1). The appropriate design strategies will be discussed below.

Box 1 Features of a well designed RCT

The sample to be studied will be appropriate to the hypothesis being tested so that any results are appropriately generalisable. The study will recruit sufficient patients to allow it to have a high probability of detecting a clinically important difference between treatments if a difference truly exists.

There will be effective (concealed) randomisation of the subjects to the intervention/control groups (to eliminate selection bias and minimise confounding variables).

Both groups will be treated identically in all respects except for the intervention being tested and to this end patients and investigators will ideally be blinded to which group an individual is assigned.

The investigator assessing outcome will be blinded to treatment allocation.

Patients are analysed within the group to which they were allocated, irrespective of whether they experienced the intended intervention (intention to treat analysis).

Analysis focuses on testing the research question that initially led to the trial (that is, according to the a priori hypothesis being tested), rather than “trawling” to find a significant difference.

GETTING STARTED: DEVELOPING A PROTOCOL FROM THE INITIAL HYPOTHESIS

Analytical studies need a hypothesis that specifies an anticipated association between predictor and outcome variables (or no association, as in a null hypothesis), so that statistical tests of significance can be performed.3 Good hypotheses are specific and formulated in advance of the study's commencement (a priori). Having chosen a subject to research, and specifically a hypothesis to be tested, preparation should be thorough and is best documented in the form of a protocol that outlines the proposed methodology. This will start with a statement of the hypothesis to be tested, for example: “...that drug A is more efficacious in reducing the diastolic blood pressure than drug B in patients with moderate essential hypertension.” An appropriate rationale for the study will follow, with a relevant literature review focused on any existing evidence relating to the condition or interventions to be studied.

The subject to be addressed should be of clinical, social, or economic significance to afford relevance to the study, and the hypothesis to be evaluated must contain outcomes that can be accurately measured. The subsequent study design (population sampling, randomisation, applying the intervention, outcome measures, analysis, etc) will need to be defined to permit a true evaluation of the hypothesis being tested. In practice, this will be the best compromise between what is ideal and what is practical.

Writing a thorough and comprehensive protocol in the planning stage of the research project is essential. Peer review of a written protocol allows others to criticise the methodology constructively at a stage when appropriate modification is possible. Seeking advice from experienced researchers, particularly involving a local research and development support unit, or some other similar advisory centre, can be very beneficial. It is far better to identify and correct errors in the protocol at the design phase than to try to adjust for them in the analysis phase. Manuscripts rarely get rejected for publication because of inappropriate analysis, which is remediable, but rather because of design flaws.

There are several steps in performing an RCT, all of which need to be considered while developing a protocol. The first is to choose an appropriate (representative) sample of the population from which to recruit. Having measured relevant baseline variables, the next task is to randomise subjects into one of two (or more) groups, and subsequently to perform the intervention as appropriate to the assignment of the subject. The pre-defined outcome measures will then be recorded and the findings compared between the two groups, with appropriate quality control measures in place to assure quality data collection. Each of these steps, which can be tested in a pilot study, has implications for the design of the trial if the findings are to be valid. They will now be considered in turn.

CHOOSING THE RIGHT POPULATION

This part of the design is crucial because poor sampling will undermine the generalisability of the study or, even worse, reduce its validity if sampling bias is introduced.4 The task begins with deciding what kind of subjects to study and how to go about recruiting them. The target population is the population to which the results are intended to apply. It is important to set inclusion and exclusion criteria defining target populations that are appropriate to the research hypothesis. These criteria are also typically set to make the researchers' task realistic, for within the target population there must be an accessible/appropriate sample to recruit.

The sampling strategy used will determine whether the sample actually studied is representative of the target population. For the findings of the study to be generalisable to the population as a whole, the sample must be representative of the population from which it is drawn. The best design is consecutive sampling from the accessible population (taking every patient who meets the selection criteria over the specified time period). This may produce an excessively large sample from which, if necessary, a subsample can be randomly drawn. If the inclusion criteria are broad, it will be easy to recruit study subjects and the findings will be generalisable to a comparatively large population. Exclusion criteria need to be defined and will typically cover subjects with conditions that may contraindicate the intervention to be tested, subjects who will have difficulty complying with the required regimens, those who cannot provide informed consent, and so on.

Summary: population sampling

The study sample must be representative of the target population for the findings of the study to be generalisable.

Inclusion and exclusion criteria will determine who will be studied from within the accessible population.

The most appropriate sampling strategy is normally consecutive sampling, although stratified sampling may legitimately be required.

A sample size calculation and pilot study will permit appropriate planning in terms of time and money for the recruitment phase of the main study.

Follow CONSORT guidelines on population sampling.6

In designing the inclusion criteria, the investigator should consider the outcome to be measured; if this is comparatively rare in the population as a whole, then it would be appropriate to recruit at random or consecutively from populations at high risk of the condition in question (stratified sampling). The subsamples in a stratified sample will draw disproportionately from groups that are less common in the population as a whole, but of particular relevance to the investigator.

Other forms of sampling, in which subjects are recruited because they are easily accessible or appropriate (convenience or judgmental sampling), will have advantages in terms of cost, time, and logistics, but may produce a sample that is not representative of the target population, and it is likely to be difficult to define exactly who has and has not been included.

Having determined an appropriate sample to recruit, it is necessary to estimate the size of the sample required to allow the study to detect a clinically important difference between the groups being compared. This is performed by means of a sample size calculation.5 As clinicians, we must be able to specify what we would consider to be a clinically significant difference in outcome. Given this information, or an estimate of the effect size based on previous experience (from the literature or from a pilot study), and the design of the study, a statistical adviser will be able to perform an appropriate sample size calculation. This will determine the sample size required to detect the pre-determined clinically significant difference with a certain degree of power. As previously mentioned, early involvement of an experienced researcher or research support unit in the design stage is essential in any RCT.
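To illustrate the arithmetic behind such a calculation, here is a minimal sketch (not from the original paper) using the standard normal-approximation formula for comparing two means; the blood pressure figures are hypothetical.

```python
# A minimal sketch of a two-group sample size calculation for a
# continuous outcome (difference between two means), using the
# standard normal-approximation formula. All numbers are
# hypothetical illustrations.
from math import ceil
from scipy.stats import norm

def sample_size_per_group(delta, sd, alpha=0.05, power=0.90):
    """Subjects per group needed to detect a true difference `delta`
    between two means, assuming a common standard deviation `sd`
    (two-sided test)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)           # 1.28 for 90% power
    return ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2)

# Example: detect a 5 mm Hg difference in diastolic blood pressure
# (assumed SD 12 mm Hg) with 90% power at the 5% significance level.
print(sample_size_per_group(delta=5, sd=12))  # about 122 per group
```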

After deciding on the population to be studied and the sample size required, it will now be possible to plan the appropriate amount of time (and money) required to collect the data necessary. A limited pilot of the methods is essential to gauge recruitment rate and address in advance any practical issues that may arise once data collection in the definitive study is underway. Pilot studies will guide decisions about designing approaches to recruitment and outcome measurement. A limited pilot study will give the investigator an idea of what the true recruitment rate will be (not just the number of subjects available, but also their willingness to participate). It may be even more helpful in identifying any methodological issues related to applying the intervention or measuring outcome variables (see below), which can be appropriately addressed.

RANDOMISATION: THE CORNERSTONE OF THE RCT

Various baseline characteristics of the subjects recruited should be measured at the stage of initial recruitment into the trial. These will include basic demographic observations, such as name, age, sex, hospital identification, etc, but more importantly should include any important prognostic factors. It will be important at the analysis stage to show that these potential confounding variables are equally distributed between the two groups; indeed, it is usual practice when reporting an RCT to demonstrate the integrity of the randomisation process by showing that there is no significant difference between baseline variables (following CONSORT guidelines).6

The random assignment of subjects to one or another of two groups (differing only by the intervention to be studied) is the basis for measuring the marginal difference between these groups in the relevant outcome. Randomisation should equally distribute any confounding variables between the two groups, although it is important to be aware that differences in confounding variables may arise through chance.

Randomisation is one of the cornerstones of the RCT7 and a true random allocation procedure should be used. It is also essential that treatment allocations are concealed from the investigator until recruitment is irrevocable, so that bias (intentional or otherwise) cannot be introduced at the stage of assigning subjects to their groups.8 The production of computer generated sets of random allocations, by a research support unit (who will not be performing data collection) in advance of the start of the study, which are then sealed in consecutively numbered opaque envelopes, is an appropriate method of randomisation. Once the patient has given consent to be included in the trial, he/she is then irreversibly randomised by opening the next sealed envelope containing his/her assignment.

An alternative method, particularly for larger, multicentre trials, is to have a remote randomisation facility. The clinician contacts this facility by telephone when ready to randomise the next patient; the initials and study number of the patient are read to the person performing the randomisation, who records them and then reads back the allocation for that subject.

Studies that involve small to moderate sample sizes (for example, less than 50 per group) may benefit from “blocked” and/or “stratified” randomisation techniques. These methods will balance (where chance alone might not) the groups in terms of the number of subjects they contain, and in the distribution of potential confounding variables (assuming, of course, that these variables are known before the onset of the trial). They are the design phase alternative to statistically adjusting for confounding variables in the analysis phase, and are preferred if the investigator intends to carry out subgroup analysis (on the basis of the stratification variable).

Blocked randomisation is a technique used to ensure that the number of subjects assigned to each group is equally distributed. Randomisation is set up in blocks of a pre-determined size (for example 6, 8, 10, etc). Randomisation for a block size of 10 would proceed normally until five assignments had been made to one group, and then the remaining assignments would be to the other group until the block of 10 was complete. This means that for a sample size of 80 subjects, exactly 40 would be assigned to each group. The block size must be concealed from the investigator performing the study and, if the study is non-blinded, the block sizes should vary randomly (otherwise the last allocation(s) in a block would, in effect, be unconcealed).
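The following hypothetical sketch (not the paper's own code) shows how a permuted-block allocation list of the kind described above could be generated:

```python
# A minimal sketch of permuted-block randomisation. Each block
# contains equal numbers of A and B allocations in a random order,
# so the two groups can never differ in size by more than half a block.
import random

def blocked_allocations(n_subjects, block_size=10, arms=("A", "B")):
    allocations = []
    while len(allocations) < n_subjects:
        block = list(arms) * (block_size // len(arms))  # e.g. five A's, five B's
        random.shuffle(block)                           # random order within the block
        allocations.extend(block)
    return allocations[:n_subjects]

print(blocked_allocations(80))  # exactly 40 A's and 40 B's
```

In practice, as the text notes, the list would be produced by someone independent of data collection, and the block size concealed or varied so that upcoming allocations cannot be predicted.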

Stratified randomisation is a technique for ensuring that an important baseline variable (potential confounding factor) is more evenly distributed between the two groups than chance alone might otherwise assure. In examining the effect of a treatment for cardiac failure, for example, the degree of existing cardiac failure will be a baseline variable predicting outcome, and so it is important that this is the same in the two groups. To achieve this, the sample can be stratified at baseline into patients with mild, moderate, or severe cardiac failure, and then randomisation occurs within each of these “strata”. There is a limited number of baseline variables that can be balanced by stratification because the numbers of patients within a stratum are reduced. In the above example, to stratify also for age, previous infarction, and the co-existence of diabetes would be impractical.
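As a hypothetical illustration of stratified randomisation (the strata and block size here are invented for the sketch), a separate allocation list is kept for each stratum:

```python
# A minimal sketch of stratified randomisation: each stratum (here,
# severity of cardiac failure, as in the example above) gets its own
# permuted-block allocation list, so severity is balanced across arms.
import random

def make_block(block_size=6, arms=("A", "B")):
    block = list(arms) * (block_size // len(arms))
    random.shuffle(block)
    return block

allocation_lists = {"mild": [], "moderate": [], "severe": []}

def allocate(stratum):
    """Draw the next allocation from the list for the subject's stratum."""
    if not allocation_lists[stratum]:
        allocation_lists[stratum].extend(make_block())
    return allocation_lists[stratum].pop(0)

print(allocate("moderate"))  # 'A' or 'B'
```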

Summary: randomisation

The random assignment of subjects into one of two groups is the basis for establishing a causal interpretation for an intervention.

Effective randomisation will minimise confounding variables that exist at the time of randomisation.

Randomisation must be concealed from the investigator.

Blocked randomisation may be appropriate for smaller trials to ensure equal numbers in each group.

Stratified randomisation will ensure that a potential baseline confounding variable is equally distributed between the two groups.

Analysis of results should occur based on the initial randomisation, irrespective of what may subsequently actually have happened to the subject (that is, “intention to treat analysis”).

Sample attrition (“drop outs”), once subjects have consented and been randomised, may be an important factor. Patients may refuse to continue with the trial, they may be lost to analysis for whatever reason, and there may be changes in the protocol (or mistakes) subsequent to randomisation, even resulting in the patient receiving the wrong treatment. This is, in fact, not that uncommon: a patient randomised to have a minimally invasive procedure may need to progress to an open operation, for example, or a patient assigned to medical treatment may require surgery at a later stage. In the RCT, the analysis must include an unbiased comparison of the groups produced by the process of randomisation, based on all the people who were randomised; this is known as analysis by intention to treat. Intention to treat analysis depends on having outcomes for all subjects, so even if patients “drop out”, it is important to try to keep them in the trial if only for outcome measurement. This avoids the introduction of bias as a consequence of potentially selectively dropping patients from previously randomised/balanced groups.
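The difference between analysing by allocated group and by treatment actually received can be made concrete with a small hypothetical sketch:

```python
# A minimal sketch contrasting intention-to-treat analysis with an
# "as-treated" analysis (all data hypothetical). Under intention to
# treat, outcomes are compared by the arm to which subjects were
# randomised, regardless of the treatment they actually received.
import pandas as pd

trial = pd.DataFrame({
    "assigned":  ["A", "A", "A", "B", "B", "B"],
    "received":  ["A", "A", "B", "B", "B", "B"],  # one subject crossed over
    "recovered": [1, 0, 1, 1, 0, 0],
})

print(trial.groupby("assigned")["recovered"].mean())  # intention to treat
print(trial.groupby("received")["recovered"].mean())  # as treated: risks bias
```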

APPLYING THE INTERVENTION AND MEASURING OUTCOME: THE IMPORTANCE OF BLINDING

After randomisation there will be two (or more) groups, one of which will receive the test intervention and another (or more) which receives a standard intervention or placebo. Ideally, neither the study subjects, nor anybody performing subsequent measurements and data collection, should be aware of the study group assignment. Effective randomisation will eliminate confounding by variables that exist at the time of randomisation. Without effective blinding, if subject assignment is known by the investigator, bias can be introduced because extra attention may be given to the intervention group (intended or otherwise).8 This would introduce variables into one group not present in the other, which may ultimately be responsible for any differences in outcome observed. Confounding can therefore also occur after randomisation. Double blinding of the investigator and patient (for example, by making the test treatment and standard/placebo treatments appear the same) will eliminate this kind of confounding, as any extra attentions should be equally spread between the two groups (with the exception, as for randomisation, of chance maldistributions).

While the ideal study design will be double blind, this is often difficult to achieve effectively, and is sometimes not possible (for example, with surgical interventions). Where blinding is possible, complex (and costly) arrangements need to be made to manufacture a placebo that appears similar to the test drug, to design appropriate and foolproof systems for packaging and labelling, and to have a system permitting rapid unblinding in the event of any untoward event causing the patient to become unwell. The hospital pharmacy can be invaluable in organising these issues. Blinding may break down subsequently if the intervention has recognisable side effects. The effectiveness of the blinding can be systematically tested after the study is completed by asking investigators to guess treatment assignments; if a significant proportion are able to guess the assignment correctly, then the potential for this as a source of bias should be considered.
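One simple way to formalise that check (a hypothetical sketch, with invented numbers) is a binomial test of the investigators' correct-guess rate against the 50% expected by chance:

```python
# A minimal sketch of assessing blinding after the trial: if
# investigators guess treatment assignment correctly far more often
# than the 50% expected by chance, blinding may have broken down.
from scipy.stats import binomtest

correct, total = 38, 50   # hypothetical guess counts
result = binomtest(correct, total, p=0.5)
print(result.pvalue)      # a small p-value suggests the blinding failed
```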

Summary: intervention and outcome

Blinding at the stage of applying the intervention and measuring the outcome is essential if bias (intentional or otherwise) is to be avoided.

The subject and the investigator should ideally be blinded to the assignment (double blind), but even where this is not possible, a blinded third party can measure outcome.

Blinding is achieved by making the intervention and the control appear similar in every respect.

Blinding can break down for various reasons, but this can be systematically assessed.

Continuous outcome variables have the advantage over dichotomous outcome variables of increasing the power of a study, permitting a smaller sample size.

Once the intervention has been applied, the groups will need to be followed up and various outcome measures will be performed to evaluate the effect or otherwise of that intervention. The outcome measures to be assessed should be appropriate to the research question, and must be ones that can be measured accurately and precisely. Continuous outcome variables (quantified on an infinite arithmetic scale, for example, time) have the advantage over dichotomous outcome variables (only two categories, for example, dead or alive) of increasing the power of a study, permitting a smaller sample size. It may be desirable to have several outcome measures evaluating different aspects of the results of the intervention. It is also necessary to design outcome measures that will detect the occurrence of specified adverse effects of the intervention.

It is important to emphasise, as previously mentioned, that the person measuring the outcome variables (as well as the person applying the intervention) should be blinded to the treatment group of the subject to prevent the introduction of bias at this stage, particularly when the outcome variable requires any judgement on the part of the observer. Even if it has not been possible to blind the administration of the intervention, it should be possible to design the study so that outcome measurement is performed by someone who is blinded to the original treatment assignment.

QUALITY CONTROL

A critical aspect of clinical research is quality control. Quality control is often overlooked during data collection, a potentially tedious and repetitive phase of the study, which may lead subsequently to errors because of missing or inaccurate measurements. Essentially, quality control issues occur in clinical procedures, measuring outcomes, and handling data. Quality control begins in the design phase of the study when the protocol is being written and is first evaluated in the pilot study, which will be invaluable in testing the proposed sampling strategy, methods for data collection and subsequent data handling.

Once the methods part of the protocol is finalised, an operations manual can be written that specifically defines how to recruit subjects, perform measurements, etc. This is essential when there is more than one investigator, as it will standardise the actions of all involved. After allowing all those involved to study the operations manual, there will be the opportunity to train (and subsequently certify) investigators to perform various tasks uniformly.

Ideally, any outcome measurement taken on a patient should be precise and reproducible; it should not depend on the observer who took the measurement.4 It is well known, for example, that some clinicians in their routine medical practice record consistently higher blood pressure values than others. Such interobserver variation in the setting of a clinical trial is clearly unacceptable and steps must be taken to avoid it. It may be possible, if the trial is not too large, for all measurements to be performed by the same observer, in which case the problem is avoided. However, it is often necessary to use multiple observers, especially in multicentre trials. Training sessions should be arranged to ensure that observers (and their equipment) can produce the same measurements in any given subject. Repeat sessions may be necessary if the trial is of long duration. You should try to use as few observers as possible without exhausting the available staff. The trial should be designed so that any interobserver variability cannot bias the results, by having each observer evaluate patients in all treatment groups.

Inevitably, there will be a principal investigator; this person will be responsible for assuring the quality of data measurement through motivation, appropriate delegation of responsibility, and supervision. An investigators’ meeting before the study starts and regular visits to the team members or centres by the principal investigator during data collection, permit communication, supervision, early detection of problems, feedback and are good for motivation.

Quality control of data management begins before the start of the study and continues during the study. Forms to be used for data collection should be appropriately designed to encourage the collection of good quality data. They should be user friendly, self explanatory, clearly formatted, and collect only the data that are needed. They can be tested in the pilot. Data will subsequently need to be transcribed onto a computer database from these forms. The database should also be set up so that it is similar in format to the forms, allowing easy transcription of information. The database can be pre-prepared to accept only values within given permissible ranges, to check consistency with previous entries, and to alert the user to missing values. Ideally, data should be entered in duplicate, with the database only accepting data that are concordant with the first entry; this, however, is time consuming, and it may be adequate to check randomly selected forms against a printout of the corresponding datasheet to ensure transcription error is minimal, acting appropriately if an unacceptably high number of mistakes is discovered.
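As a hypothetical sketch of the kinds of entry checks just described (the field names and ranges are invented):

```python
# A minimal sketch of automated data entry checks: values outside
# permissible ranges, values inconsistent with earlier entries, and
# missing values are all flagged. Field names and ranges are
# hypothetical.
PERMISSIBLE = {"age": (18, 100), "diastolic_bp": (40, 140)}

def check_record(record):
    problems = []
    for field, (low, high) in PERMISSIBLE.items():
        value = record.get(field)
        if value is None:
            problems.append(f"{field}: missing value")
        elif not low <= value <= high:
            problems.append(f"{field}: {value} outside range {low}-{high}")
    # a simple consistency check against an earlier entry
    if record.get("followup_day", 0) < record.get("entry_day", 0):
        problems.append("follow-up recorded before study entry")
    return problems

print(check_record({"age": 17, "diastolic_bp": None,
                    "entry_day": 3, "followup_day": 1}))
```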

Once the main phase of data collection has begun, you should try to make as few changes to the protocol as possible. In an ideal world, the pilot study will have identified any issues requiring modification of the protocol, but inevitably some problem, minor or major, will arise once the study has begun. It is better to forgo minor alterations that are considered “desirable” but not necessary, and to resist the inclination to make changes. Sometimes, more substantive issues are highlighted and protocol modification is necessary to strengthen the study. These changes should be documented and disseminated to all the investigators (with appropriate changes made to the operations manual and any re-training performed as necessary). The precise date that the revision is implemented should be noted, with a view to separate analysis of data collected before and after the revision, if this is considered necessary by the statistical advisor. Such revisions to the protocol should only be undertaken if, after careful consideration, it is felt that making the alteration will significantly improve the findings, or that not changing the protocol will seriously jeopardise the project. These considerations have to be balanced against the statistical difficulty of analysis after protocol revision.

SOME FINAL THOUGHTS

A well designed, methodologically sound RCT evaluating an intervention provides strong evidence of a cause-effect relation if one exists; it is therefore powerful in changing practice to improve patient outcome, this being the ultimate goal of research on therapeutic effectiveness. Conversely, poorly designed studies are dangerous because of their potential to influence practice based on flawed methodology. As discussed above, the validity and generalisability of the findings are dependent on the study design.

Summary: quality control

An inadequate approach to quality control will lead to potentially significant errors due to missing or inaccurate results.

An operations manual will allow standardisation of all procedures to be performed.

To reduce interobserver variability in outcome measurement, training can be provided to standardise procedures in accordance with the operations manual.

Data collection forms should be user friendly, self explanatory, and clearly formatted, with only truly relevant data being collected.

Subsequent data transfer onto a computerised database can be safeguarded with various measures to reduce transcription errors.

Protocol revisions after the study has started should be avoided if at all possible but, if necessary, should be appropriately documented and dated to permit separate analysis.

Early involvement of the local research support unit is essential in developing a protocol. Subsequent peer review and ethical committee review will ensure that it is well designed, and a successful pilot will ensure that the research goals are practical and achievable.

Delegate tasks to those who have the expertise; for example, allow the research support unit to perform the randomisation, leave the statistical analysis to a statistician, and let a health economist advise on any cost analysis. Networking with the relevant experts is invaluable in the design phase and will contribute considerably to the final credence of the findings.

Finally, dissemination of the findings through publication is the final peer review process and is vital to help others act on the available evidence. Writing up the RCT at completion, like developing the protocol at inception, should be thorough and detailed9 (following CONSORT guidelines6), with emphasis not just on findings, but also on methodology. Potential limitations or sources of error should be discussed so that the readership can judge for themselves the validity and generalisability of the research.10

References

1. Sibbald B, Roland M. Understanding controlled trials: Why are randomised controlled trials important? BMJ 1998;316:201.
2. Pocock SJ. Clinical trials: a practical approach. Chichester: Wiley, 1984.
3. Hulley SB, Cummings SR. Designing clinical research—an epidemiological approach. Chicago: Williams and Wilkins, 1988.
4. Bowling A. Research methods in health: investigating health and health services. Buckingham: Open University Press, 1997.
5. Lowe D. Planning for medical research: a practical guide to research methods. Middlesbrough: Astraglobe, 1993.
6. Begg C, Cho M, Eastwood S, et al. Improving the quality of reporting of randomised controlled trials: the CONSORT statement. JAMA 1996;276:637–9.
7. Altman DG. Randomisation. BMJ 1991;302:1481–2.
8. Schulz KF, Chalmers I, Hayes RJ, et al. Empirical evidence of bias: dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 1995;273:408–12.
9. The Standards of Reporting Trials Group. A proposal for structured reporting of randomised controlled trials. JAMA 1994;272:1926–31.
10. Altman DG. Better reporting of randomised controlled trials: the CONSORT statement. BMJ 1996;313:570–1.

Further reading

  • Sackett DL, Haynes RB, Guyatt GH, et al . Clinical epidemiology: a basic science for clinical medicine. 2nd edn. Toronto: Little, Brown, 1991.
  • Sackett DL, Richardson WS, Rosenberg W, et al . Evidence-based medicine: how to practice and teach EBM. Edinburgh: Churchill Livingstone, 1997.
  • Polgar S. Introduction to research in health sciences. 2nd edn. Edinburgh: Churchill Livingstone, 1991.
  • Bland M. An introduction to medical statistics. Oxford: Oxford Medical Publications, 1987.


Study Design 101: Randomized Controlled Trial


A study design that randomly assigns participants into an experimental group or a control group. As the study is conducted, the only expected difference between the control and experimental groups in a randomized controlled trial (RCT) is the outcome variable being studied.

Advantages

  • Good randomization will "wash out" any population bias
  • Easier to blind/mask than observational studies
  • Results can be analyzed with well known statistical tools
  • Populations of participating individuals are clearly identified

Disadvantages

  • Expensive in terms of time and money
  • Volunteer biases: the population that participates may not be representative of the whole
  • Loss to follow-up attributed to treatment

Design pitfalls to look out for

An RCT should be a study of one population only.

Was the randomization actually "random", or are there really two populations being studied?

The variables being studied should be the only variables between the experimental group and the control group.

Are there any confounding variables between the groups?

Fictitious Example

To determine how a new type of short wave UVA-blocking sunscreen affects the general health of skin in comparison to a regular long wave UVA-blocking sunscreen, 40 trial participants were randomly separated into equal groups of 20: an experimental group and a control group. All participants' skin health was then initially evaluated. The experimental group wore the short wave UVA-blocking sunscreen daily, and the control group wore the long wave UVA-blocking sunscreen daily.

After one year, the general health of the skin was measured in both groups and statistically analyzed. In the control group, wearing long wave UVA-blocking sunscreen daily led to improvements in general skin health for 60% of the participants. In the experimental group, wearing short wave UVA-blocking sunscreen daily led to improvements in general skin health for 75% of the participants.
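Whether that fictitious 75% vs 60% difference is statistically convincing can be checked with a significance test; the sketch below (not part of the original example) applies Fisher's exact test to the implied 2 × 2 table.

```python
# A minimal sketch testing the fictitious example above: with only
# 20 participants per group, is 15/20 vs 12/20 improved more than
# chance could explain? Fisher's exact test on the 2x2 table.
from scipy.stats import fisher_exact

#        improved  not improved
table = [[15, 5],   # experimental (short wave UVA-blocking)
         [12, 8]]   # control (long wave UVA-blocking)

odds_ratio, p_value = fisher_exact(table)
print(p_value)  # well above 0.05: no significant difference at this sample size
```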

Real-life Examples

van Der Horst, N., Smits, D., Petersen, J., Goedhart, E., & Backx, F. (2015). The preventive effect of the nordic hamstring exercise on hamstring injuries in amateur soccer players: a randomized controlled trial. The American Journal of Sports Medicine, 43 (6), 1316-1323. https://doi.org/10.1177/0363546515574057

This article reports on the research investigating whether the Nordic Hamstring Exercise is effective in preventing both the incidence and severity of hamstring injuries in male amateur soccer players. Over the course of a year, there was a statistically significant reduction in the incidence of hamstring injuries in players performing the NHE, but for those injured, there was no difference in severity of injury. There was also a high level of compliance in performing the NHE in that group of players.

Natour, J., Cazotti, L., Ribeiro, L., Baptista, A., & Jones, A. (2015). Pilates improves pain, function and quality of life in patients with chronic low back pain: a randomized controlled trial. Clinical Rehabilitation, 29 (1), 59-68. https://doi.org/10.1177/0269215514538981

This study assessed the effect of adding Pilates to a treatment regimen of NSAID use for individuals with chronic low back pain. Individuals who included the Pilates method in their therapy took fewer NSAIDs and experienced statistically significant improvements in pain, function, and quality of life.

Related Formulas

  • Relative Risk
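For reference, the relative risk can be written from the standard 2 × 2 trial table, where a and b are the numbers of treated participants with and without the outcome, and c and d the corresponding numbers in the control group:

```latex
\mathrm{RR} = \frac{a/(a+b)}{c/(c+d)}
```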

Related Terms

Blinding/Masking

When participants who have been randomly assigned to groups do not know whether they are in the control group or the experimental group.

Causation

Being able to show that an independent variable directly causes the dependent variable. This is generally very difficult to demonstrate in most study designs.

Confounding Variables

Variables that cause/prevent an outcome from occurring outside of or along with the variable being studied. These variables render it difficult or impossible to distinguish the relationship between the variable and outcome being studied.

Correlation

A relationship between two variables, but not necessarily a causation relationship.

Double Blinding/Masking

When the researchers conducting a blinded study do not know which participants are in the control group or the experimental group.

Null Hypothesis

The assumption that the relationship between the independent and dependent variables that the researchers hope to demonstrate does not exist. To "reject the null hypothesis" is to say that there is a relationship between the variables.

Population/Cohort

A group that shares the same characteristics among its members (population).

Population Bias/Volunteer Bias

A sample may be skewed by those who are selected or self-selected into a study. If only certain portions of a population are considered in the selection process, the results of a study may have poor validity.

Randomization

Any of a number of mechanisms used to assign participants into different groups with the expectation that these groups will not differ in any significant way other than treatment and outcome.

Research (alternative) Hypothesis

The relationship between the independent and dependent variables that researchers believe they will prove through conducting a study.

Sensitivity

The relationship between what is considered a symptom of an outcome and the outcome itself; or the percent chance of not getting a false negative (see formulas).

Specificity

The relationship between not having a symptom of an outcome and not having the outcome itself; or the percent chance of not getting a false positive (see formulas).
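In the usual notation (TP, FN, TN, FP for true/false positives and negatives), these two definitions correspond to:

```latex
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP}
```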

Type 1 error

Rejecting a null hypothesis when it is in fact true. This is also known as an error of commission.

Type 2 error

The failure to reject a null hypothesis when it is in fact false. This is also known as an error of omission.

Now test yourself!

1. Having a volunteer bias in the population group is a good thing because it means the study participants are eager and make the study even stronger.

a) True b) False

2. Why is randomization important to assignment in an RCT?

a) It enables blinding/masking
b) So causation may be extrapolated from results
c) It balances out individual characteristics between groups
d) a and c
e) b and c



Published: 12 May 2017

How to design a randomised controlled trial

P. Brocklehurst and Z. Hoare

British Dental Journal volume 222, pages 721–726 (2017)


  • Randomized controlled trials

Key points

  • Explains the PICO statement and the basics of randomisation.
  • Readers will understand the basics of statistical testing, sample size calculations and the basics of bias.
  • Readers will also understand the role of Clinical Trials Units and why they are important when considering an experimental design.

This practical paper explains how to design a randomised controlled trial (RCT) for those who have little prior knowledge of the topic. It covers the basics of randomisation, statistical testing, sample size calculations, bias and the role of Clinical Trials Units.


Introduction

Randomised controlled trials (RCTs) are the workhorse of evidence-based healthcare and the only research design that can demonstrate causality, that is, that an intervention causes a direct change in a clinical outcome. Although they can be complex, the idea at its simplest is to 'create two identical systems into one of which a new component [the intervention] is introduced'.1 Observations are then made of outcome differences that occur between experimental and control conditions and 'should a change occur, it is attributed to the one difference between them' (Figs 1 and 2).1 This paper aims to explain how to design an RCT for those who have little prior knowledge of the topic, and specifically explores the following areas:

  • The PICO statement
  • Randomisation
  • Trial design
  • Statistical testing
  • Sample size calculations
  • Clinical Trials Units.

Figure 1 The experimental approach to evaluation

Figure 2 Trials in context

Before you start

Whether you are designing an individually randomised, cluster randomised, stepped wedge or adaptive trial, your research question always returns to your PICO statement. Precision in defining a research question is a key skill; the more precise the research question is, the easier it is to design your study. The PICO statement divides a research question into four basic parts: the patient/population (who are you intending to conduct the study with/on), the intervention, the control and the outcome measure.

P = Population

The first step in developing a well-built question is to identify the patient problem or describe the population group of interest. When identifying the P in PICO it is helpful to ask yourself how you would describe this population group to another person. What are the important characteristics of the group? For example, this could be children, or more specifically children under five years of age. In the Northern Ireland Caries Prevention In Practice (NIC-PIP) trial, for example, the eligible population was caries-free children aged between two and three years (Table 1).2

I = Intervention

Identifying the intervention is the second step in the PICO process. It is important to identify what you plan to do to your population. This may include the use of a specific diagnostic test, treatment, adjunctive therapy, medication or a recommendation to use a product or procedure. When thinking about conducting a randomised controlled trial, the intervention would be the health technology that you intend to test experimentally. In NIC-PIP, this was the delivery of a preventive regime in line with Delivering Better Oral Health. In the iQuaD trial, the intervention was personalised oral hygiene advice (Table 1).3 Some RCTs also employ more than one active arm simultaneously. For example, the FiCTION trial utilises two active arms: conventional caries management with best practice prevention and biological management of caries with best practice prevention (Table 1).4

C = Control or comparison

The control or comparison is the third step to take in building your PICO question. This represents the alternative you are planning to compare the intervention to. This can take a number of forms. For example, it could be no active intervention; this is classically known as the 'control group' in an RCT, and is how you might test a new high fluoride toothpaste to prevent dental caries. However, it could equally refer to testing a new intervention against existing treatment, or comparing two different types of intervention, like Atraumatic Restorative Treatment versus the Hall Technique, in what is called a 'head-to-head'.5

O = Outcome

Determining the primary outcome measure (POM) is the final step in building the PICO question and one of the most important, as it has ramifications for how you statistically test for differences between the intervention and the control/comparator. It specifies what you would expect to see, should the intervention be successful. It is important to decide here whether your POM would be measured using a continuous variable or an ordinal one. The difference between these two types of variables is that a continuous variable describes outcomes that are measured on a scale, like height or weight, whereas ordinal variables are categorical in nature and, as the name suggests, can be placed in order. For example, if a person is asked about their feelings towards their dental care and the available responses are unsatisfied, neutral or satisfied, this would be an ordinal variable.

Another key aspect to specify when thinking about your POM is its time to expression, that is, how quickly you would expect to see your result. Time to expression has a critical influence on the duration of the trial (and thereby its cost) and will obviously vary with the type of disease under investigation. For example, trials evaluating interventions for gingivitis will have a much shorter duration than caries trials. In the FiCTION trial, the research question is 'what is the clinical and cost effectiveness of restoration of caries in primary teeth, compared to no treatment?' Here, the POM is the incidence of either pain or infection related to dental caries and the follow-up period is three years (Table 1).

When a new RCT is being planned, researchers are said to be in equipoise. This means that we are uncertain whether the new treatment being experimentally tested actually produces a benefit for the participant. This is an ethical position. If we already have evidence that a new treatment is better than another, we should be giving this treatment to the patient already, and if we know there is no difference or that the new treatment is harmful, we shouldn't be offering it at all. Consolidated Standards of Reporting Trials (CONSORT) 2010 states 'ideally, participants should be assigned to comparison groups in the trial on the basis of a chance (random) process characterised by unpredictability.'6 The requirement is there for a reason. Randomisation of the participants is crucial because it allows the principles of statistical theory to stand and as such allows a thorough analysis of the trial data without bias.

So, how do we randomise? Surely putting participants into random groups is as simple as tossing a coin? This is randomisation in its simplest form, but in many cases it results in an unbalanced sample. For example, in a small trial of, say, 50 participants, tossing a fair coin 50 times would result in an exact 25:25 split only about 11% of the time!
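That figure can be checked directly with the exact binomial probability (a quick sketch):

```python
# Probability that 50 tosses of a fair coin split exactly 25:25.
from scipy.stats import binom

print(binom.pmf(25, 50, 0.5))  # ~0.112: an exact split only about 11% of the time
```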

There are many different types of randomisation. Tossing a coin or using a random number table are examples of simple randomisation. Restricted randomisation uses methods to control the imbalance between the groups; generating a random list AAABBBABABAABB allows participants to be allocated as they arrive to the next treatment on the list. With the list here we know that at the sixth, eighth, tenth and fourteenth participants we have balance in allocation. Stratified randomisation allows us to account for and control certain characteristics within the population of participants such as gender or age (factors that might confound the final effect). It is recommended that stratification should be used sparingly and only in those characteristics that you think would potentially affect your outcome.

When choosing a randomisation method it is important to determine whether the method can accommodate enough treatment groups. For example, tossing a coin would be difficult to implement for a trial with three arms. It is also important to determine how predictable the method is. A deterministic algorithm (not considered randomisation) would allow you to predict what treatment would be allocated next. A static random element means that each allocation is made with a pre-determined probability (tossing a coin gives a 50:50 chance of either treatment being assigned). A dynamic element adjusts the probability of being allocated to a treatment based on what has already been allocated in the trial so far. This is the basis of the North Wales Organisation for Randomised Trials in Health's (NWORTH) remote randomisation system.7
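To make the idea of a dynamic element concrete, here is a hypothetical minimisation-style sketch. It is emphatically not the NWORTH system; the factors, the biased-coin probability and the imbalance rule are all invented for illustration.

```python
# A minimal sketch of a dynamic (minimisation-style) allocation
# rule: the next participant is steered towards whichever arm
# currently shows less imbalance on their stratification factors,
# with a biased coin preserving unpredictability. Hypothetical code.
import random

counts = {factor: {"A": 0, "B": 0}
          for factor in ("male", "female", "under65", "over65")}

def allocate(participant_factors, p_preferred=0.8):
    imbalance = {arm: sum(counts[f][arm] for f in participant_factors)
                 for arm in ("A", "B")}
    if imbalance["A"] == imbalance["B"]:
        arm = random.choice(["A", "B"])          # balanced: pure chance
    else:
        preferred = min(imbalance, key=imbalance.get)
        other = "B" if preferred == "A" else "A"
        arm = preferred if random.random() < p_preferred else other
    for f in participant_factors:
        counts[f][arm] += 1                      # update running totals
    return arm

print(allocate(["female", "over65"]))
```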

Other considerations include: can the method accommodate stratification variables and, if so, how many? Can the method handle an unequal allocation ratio? Is thresholding used (that is, a maximum level of accepted imbalance)? Can the method be implemented sequentially, that is, as patients walk through your door? Is the method complex? Is the method suitable for cluster randomisation? Decisions like these mean that often a Clinical Trials Unit is needed in the design and planning of your trial. Further reading can be found here.8,9

Designing your trial

There are different phases of RCTs. Phase I trials are described as 'first into person', whilst Phase II trials are slightly larger trials that commonly determine efficacy, that is, whether the intervention works or not. Phase III trials take this a step further and determine effectiveness, that is, whether the intervention produces health benefits in the real world. This section will focus on Phase III trial designs. Again, it is important that you consult a statistician at this stage: 'to call in the statistician after the experiment is done may be no more than asking him to perform a post-mortem examination: he may be able to say what the experiment died of'.10

Feasibility or pilot or full trial

A key question to ask when designing a trial is whether you have all the information needed to inform all the parameters of a definitive trial. A feasibility study helps you determine whether a definitive trial is 'feasible'. This type of study is often not randomised. This is because the intervention under study is commonly under development and the study plan or intervention itself would change before a definitive study is started. The important outcomes of a feasibility study will be things like the ability to recruit participants, the ability to retain participants for the length of time required, the suitability of the proposed outcome measures, and the willingness of the participants/clinicians to be involved.

Pilot studies can be thought of as a small version of the definitive trial: your intervention is now established, but there is still some uncertainty about whether your definitive trial can run as planned. A pilot study will assess all the features that a feasibility study does, and often the terms are used interchangeably. Further reading on the issue of terminology can be found here.11,12,13 With a stable intervention to test, enough information about the expected difference in the POM (known as the effect size), enough understanding of the time to expression of your chosen POM, and confidence about the feasibility of running the trial as designed, it may be time to design a definitive trial of the intended intervention.

Randomised or non-randomised

An RCT is seen as the 'gold standard' of trial design, but there are some situations where randomisation is not possible. Uncontrolled or non-randomised trials are used when randomisation is not possible or is unethical. The results of non-randomised or uncontrolled trials may be considered less reliable, as there is an increased risk of errors affecting the outcome of the trial. An example of a non-randomised trial is Lam et al.'s (2010) study of mental-health first aid training.14 The 'SPIRIT 2013 Statement' provides recommendations for a minimum set of scientific, ethical, and administrative elements that should be addressed in a clinical trial protocol.15 It is worth remembering that a trial will have to be analysed and reported very differently if randomisation is not a key component. All trials should be reported to CONSORT standards and it is worth keeping these guidelines in mind during the design process.10

A key question with RCTs is whether the randomisation is at an individual level or at a cluster level. The individually randomised parallel group design is typically seen as the standard RCT design and remains the approach favoured by funders. An example of an individually randomised parallel group design in dentistry is the FiCTION trial.5 This is appropriate IF the intervention is to be delivered to an individual and there is no possibility of contamination. However, this is not always possible. For example, if a community based oral health prevention programme was being delivered in a school, it would be difficult to undertake the intervention on one child and not affect another child. The environment in the school and the teachers that undertook the intervention would find it difficult not to influence a child in the control group. In such cases, cluster randomisation would be used. In this example, schools would be the unit of randomisation, not the individual (so a whole school would initiate the intervention, or a whole school wouldn't). Two examples of cluster randomised trials in dentistry are NIC-PIP and iQuaD.2,3

Another design of definitive trial is a stepped wedge trial. Here, the unit of randomisation is not the individual or the cluster, but time. This might sound complex, but all it means is that everyone gets the intervention, just not yet. These designs are not commonly used, but have the advantage that they map well onto a policy roll-out, where everyone will get the intervention; as a result, there are ethical advantages to the approach. For example, if a new practice prevention tool from Public Health England was going to be introduced across England and could be rolled out on a staggered basis, this approach could be undertaken. At specific time-points in the trial (the 'steps'), participating practices that have not yet adopted the intervention (and are currently acting as controls) come 'on-line'. The downside of stepped-wedge trials is that as each new wave of practices adopts the intervention, all the recruited practices in the trial have to have the POM measured again. Another disadvantage of this design is that you must have sufficient time between each step for the disease of interest to express itself. If this was a gingivitis measure, then this would not be too problematic, but if the researchers were examining the impact of a preventive intervention on dental caries in adults, the trial would be very long. One example of a dental stepped wedge trial is the SOCLE-II trial.16 Here, researchers are exploring whether enhanced oral healthcare or usual oral healthcare is the most effective for people in stroke care settings. Rather than randomising the participants into two groups, they are rolling out the intervention (enhanced oral healthcare) one ward at a time. More information on stepped-wedge trials can be found here.17,18
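The layout is easiest to see schematically; the sketch below (with hypothetical ward names, one crossing per step) prints which clusters are under the intervention at each step:

```python
# A minimal sketch of a stepped-wedge layout: four wards cross from
# control to intervention one step at a time, so by the final step
# every ward is receiving the intervention. Names are hypothetical.
wards = ["ward 1", "ward 2", "ward 3", "ward 4"]

for step in range(len(wards) + 1):
    on_intervention = wards[:step]   # wards that have crossed over
    on_control = wards[step:]        # wards still acting as controls
    print(f"step {step}: intervention={on_intervention}, control={on_control}")
```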

Demonstrating success

A statistical hypothesis in a trial describes what the researcher expects to happen to their chosen POM as the intervention is applied to the intervention arm. This assumption may or may not be true. The null hypothesis in a superiority trial assumes that changes in the POM result from chance and that there is no difference between the intervention and the control arm. The alternative hypothesis assumes that changes are influenced by some non-random cause; that is, the intervention the researcher has introduced has worked!

Individually randomised, cluster and stepped wedge designs commonly test a directional hypothesis that the new intervention produces a health benefit compared with the control or an existing intervention ('head-to-head'). These are termed 'superiority designs' and, from a statistical perspective, test whether your point estimate (the mean, if the POM is measured using a continuous variable) lies outside the 95% confidence interval (CI) in the control arm. However, sometimes the question is whether a new treatment is as good as another treatment or meets a certain standard. Trials that explore these issues are known as 'equivalence' or 'non-inferiority' designs. 'Equivalence' trials determine whether the value of the POM in the two arms is not statistically different; that is, whether the 95% CI of the difference between the two groups lies within an acceptable margin (the equivalence margin). 'Non-inferiority' designs test whether the new intervention is not unacceptably worse than the other; that is, whether the lower end of the 95% CI of the difference in POM does not extend below the pre-defined non-inferiority margin. More reading can be found elsewhere. 19
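As a worked illustration of a non-inferiority comparison on a continuous POM, the following Python sketch uses a normal approximation for the 95% CI of the difference in means; all of the numbers, including the margin, are invented, and a real trial would pre-specify these in the statistical analysis plan:

```python
import math

# Minimal sketch of a non-inferiority check. Invented data: the new
# intervention scores slightly lower, and the pre-defined non-inferiority
# margin is -0.5 (new minus standard).
mean_new, sd_new, n_new = 7.8, 1.5, 200
mean_std, sd_std, n_std = 8.0, 1.5, 200
margin = -0.5

diff = mean_new - mean_std
se = math.sqrt(sd_new**2 / n_new + sd_std**2 / n_std)
lower, upper = diff - 1.96 * se, diff + 1.96 * se

print(f"difference = {diff:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
# Non-inferiority is claimed only if the whole CI sits above the margin.
print("non-inferior" if lower > margin else "non-inferiority not demonstrated")
```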

Statistical tests commonly quote the 'p' value, which describes the statistical significance of the results. The p-value is the probability of obtaining an effect at least as extreme as the one observed, assuming the null hypothesis is true; the smaller the p-value, the stronger the evidence against the null hypothesis. It is generally accepted that a result is statistically significant if the p-value is below 0.05; if this is the case, the null hypothesis can be rejected. However, it is worth knowing that the more times you test something, the more likely you are to find something statistically significant by chance alone. Where multiple tests are performed, consideration should be given to adjusting the level of significance, as sketched below.
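The multiple testing point can be illustrated with the simplest (Bonferroni) adjustment; the p-values below are invented:

```python
# Minimal sketch of a Bonferroni adjustment: with five comparisons, the
# chance of at least one false positive at the 0.05 level is far higher
# than 5%, so the threshold is divided by the number of tests.
p_values = [0.012, 0.049, 0.031, 0.20, 0.004]
alpha = 0.05

print("unadjusted:", [p < alpha for p in p_values])

adjusted_alpha = alpha / len(p_values)  # 0.01
print(f"Bonferroni threshold = {adjusted_alpha:.3f}")
print("adjusted:", [p < adjusted_alpha for p in p_values])
```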

Participants

There are three elements to a sample size calculation: the p-value, the power and the effect size. The p-value is commonly set at 0.05 (as highlighted above). Power is the probability that you will see an effect if an effect is there to be seen. Sometimes we do not see a statistically significant effect because, quite simply, no effect exists. However, sometimes an effect is there but we do not have the numbers to see it (the trial is under-powered). For RCTs, this probability of detecting an effect, if an effect is there to be seen, is commonly set at 90%. The main element that varies in the power calculation is the effect size; this is what the researcher needs to determine in consultation with a statistician. An effect size is a point estimate of the strength of the effect, standardised by the variability of the measure; that is, the expected difference in the POM between the intervention and control arms (for example, if the POM is measured using a continuous variable, this would commonly be the difference in means, with the variability represented by the standard deviation).
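For a continuous POM compared between two equal arms, the standard normal-approximation formula is n per arm = 2 * ((z_alpha/2 + z_beta) / d)^2, where d is the standardised effect size. The Python sketch below applies it with the conventions described above (p-value threshold of 0.05, 90% power) and an invented effect size of 0.5; a real calculation should still be agreed with a statistician:

```python
import math

def z(p):
    # Inverse standard normal CDF by bisection (a stdlib-only stand-in for
    # scipy.stats.norm.ppf).
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

alpha, power, effect_size = 0.05, 0.90, 0.5  # effect size is invented
n_per_arm = 2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2
print(f"{math.ceil(n_per_arm)} participants per arm")  # about 85
```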

Table 1 and Table 2 provide the details that a statistician would need to know before a sample size can be calculated, and Table 3 provides some of the important non-statistical considerations in trial design. More reading can be found elsewhere. 20

What about bias?

Common to all trial designs is the need to reduce bias. A bias is a systematic error and can operate in either direction, under- or over-estimating the true intervention effect. Bias is caused by flaws in the design of the study and so is not the same as imprecision, which is a random error. Selection bias refers to systematic differences between the intervention and control arms caused by differences in baseline characteristics; this should be removed if the randomisation process is effective and the allocation sequence is concealed from those recruiting participants (allocation concealment). Ensuring that participants are blind to their allocation (where possible) reduces the risk that knowledge of which intervention was received, rather than the intervention itself, affects the outcome. Detection bias (or ascertainment bias) refers to systematic differences produced by differences in how outcomes are determined. Blinding of outcome assessors may reduce the risk that knowledge of the intervention, rather than the intervention itself, affects the POM. Blinding of outcome assessors is especially important when subjective POMs are used, for example, 'how nervous were you during your dental treatment?'

Attrition bias describes systematic differences between the intervention and control arms caused by withdrawals from the trial; that is, when participants no longer want to take part. This can skew the numbers and mix of participants in each arm. It may also tell you that your trial is not socially acceptable! Reporting bias (or publication bias) refers to systematic differences caused by researchers and journals reporting only positive effects of an intervention. 21 This has been seen in the pharmaceutical industry, where negative results about the effects of a particular drug can remain hidden. 22

Clinical Trials Units

Clinical Trials Units (CTUs) are 'specialist units which have been set up with a specific remit to design, conduct, analyse and publish clinical trials and other well-designed studies' ( https://youtu.be/QvGaGEHgwXg ). 23 Commonly, they have a number of functional areas:

  • Statistical support (pre-, per- and post-trial)
  • Trial management
  • Quality assurance
  • Information technology.

CTUs have expertise in the co-ordination of trials, particularly those that involve Investigational Medicinal Products, where compliance with the Medicines and Healthcare products Regulatory Agency is critical to discharging the expectations of the 'UK Medicines for Human Use (Clinical Trials) Regulations'. 24 Some also provide specialist statistical advice for clinicians. For example, although NWORTH has over £18 million of trials in its portfolio from across the United Kingdom, it is also part-funded by the Welsh Government to provide the Research Design and Conduct Service ( http://nworth-ctu.bangor.ac.uk/research-support-service/index.php.en ).

Most CTUs, but not all, are registered with the United Kingdom Clinical Research Collaboration (UKCRC), and many specialise in specific areas, such as Clinical Trials of Investigational Medicinal Products (drug trials). NWORTH has a traditional strength in pragmatic trials and trials of complex interventions. Methodologically, it links with initiatives like Trial Forge ( http://www.trialforge.org ) and works to understand how to 'make trials work' (see http://nworth-ctu.bangor.ac.uk/trials.php ).

When preparing a grant application, researchers are encouraged to approach CTUs early for help in designing their project. The National Institute for Health Research sees CTUs 'as an important component of any research application and funded project', and applicants are expected to state in any grant application whether they have contacted a CTU. It also provides a useful schematic of the necessary steps when planning a definitive trial ( http://www.ct-toolkit.ac.uk ).

This paper has explored the key design elements of RCTs. Although there are significant challenges in designing such complex studies, thinking through each component described above will provide clarity and, hopefully, encourage more GDPs to get involved in research. 25 This is important, as there is an increasing need for high-quality evidence from primary care settings to guide the delivery of future healthcare.

References

Pawson R. The Science of Evaluation. London: Sage Publications, 2013.

Tickle M, O'Neill C, Donaldson M et al. A randomised controlled trial to measure the effects and costs of a dental caries prevention regime for young children attending primary care dental services: the Northern Ireland Caries Prevention In Practice (NIC-PIP) trial. Health Technol Assess 2016; 20: 1–96.

Clarkson J E, Ramsay C R, Averley P et al. IQuaD dental trial; improving the quality of dentistry: a multicentre randomised controlled trial comparing oral hygiene advice and periodontal instrumentation for the prevention and management of periodontal disease in dentate adults attending dental primary care. BMC Oral Health 2013; 13: 58.

Innes N P T, Clarkson J E, Speed C, Douglas G V, Maguire A, FiCTION Trial Collaboration. The FiCTION dental trial protocol filling children's teeth: indicated or not? BMC Oral Health 2013; 13: 25.

Hesse D, de Araujo M P, Olegário I C, Innes N, Raggio D P, Bonifácio C C. Atraumatic restorative treatment compared to the Hall Technique for occluso-proximal cavities in primary molars: study protocol for a randomized controlled trial. Trials 2016; 17: 169.

Schulz K F, Altman D G, Moher D, for the CONSORT Group. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials. BMJ 2010; 340: c332.

Bangor University. NWORTH remote randomisation system. Available at http://nworth-ctu.bangor.ac.uk/randomisation/index.php.en (accessed January 2017).

Hoare Z, Whitaker C J, Whitaker R. Introduction to a generalized method for adaptive randomization in trials. Trials 2013; 14: 19.

Russell D, Hoare Z S J, Whitaker R, Whitaker C J, Russell I T. Generalized method for adaptive randomization in clinical trials. Stat Med 2011; 30: 922–934.

Fisher R A. Presidential address by Professor R A Fisher. Sankhyā: Ind J Statistics 1938; 4: 14–17.

Arain M, Campbell M J, Cooper C L, Lancaster G A. What is a pilot or feasibility study? A review of current practice and editorial policy. BMC Med Res Methodol 2010; 10: 67.

Lancaster G A. Pilot and feasibility studies come of age! Pilot Feasibility Stud 2015; 1: 1.

Eldridge S M, Lancaster G A, Campbell M J et al. Defining feasibility and pilot studies in preparation for randomised controlled trials: development of a conceptual framework. PLoS ONE 2016; 11: e0150205.

Lam A Y, Jorm A F, Wong D F. Mental health first aid training for the Chinese community in Melbourne, Australia: effects on knowledge about and attitudes toward people with mental illness. Int J Ment Health Syst 2010; 4: 18.

Chan A W, Tetzlaff J M, Altman D G. SPIRIT 2013 Statement: defining standard protocol items for clinical trials. Ann Intern Med 2013; 158: 200–207.

Brady M C, Stott D, Weir C J et al. Clinical and cost effectiveness of enhanced oral healthcare in stroke care settings (SOCLE II): a pilot, stepped wedge, cluster randomized, controlled trial protocol. Int J Stroke 2015; 10: 979–984.

Brown C A, Lilford R J. The stepped wedge trial design: a systematic review. BMC Med Res Methodol 2006; 6: 54.

Woods B, Russell I. Randomisation and chance-based designs in social care research. Available at http://sscr.nihr.ac.uk/PDF/MR/MR17.pdf (accessed January 2017).

Schumi J, Wittes J T. Through the looking glass: understanding non-inferiority. Trials 2011; 12: 106.

Ellis P D. The Essential Guide to Effect Sizes: statistical power, meta-analysis, and the interpretation of research results. Cambridge: Cambridge University Press, 2010.

Chan A W, Altman D G. Identifying outcome reporting bias in randomised trials on PubMed: review of publications and survey of authors. BMJ 2005; 330: 753.

Goldacre B. Bad Science. London: Harper Collins, 2008.

UKCRC. Clinical Trials Units. Available at http://www.ukcrc-ctu.org.uk/?page=CTURole (accessed January 2017).

Legislation.gov.uk. The Medicines for Human Use (Clinical Trials) Regulations. Available at http://www.legislation.gov.uk/uksi/2004/1031/contents/made (accessed January 2017).

Martin-Kerry J M, Lamont T J, Keightley A et al. Practical considerations for conducting dental clinical trials in primary care. Br Dent J 2015; 218: 629–634.

Author information

P. Brocklehurst, Director of NWORTH Clinical Trials Unit and Honorary Consultant in Dental Public Health; Z. Hoare, Principal Statistician, NWORTH Clinical Trials Unit. Correspondence to P. Brocklehurst. Refereed Paper.

Cite this article

Brocklehurst, P., Hoare, Z. How to design a randomised controlled trial. Br Dent J 2017; 222: 721–726. https://doi.org/10.1038/sj.bdj.2017.411 (Accepted: 03 February 2017; Published: 12 May 2017).



Designing a research project: randomised controlled trials and their principles

J M Kendall

Emerg Med J 2003; 20: 164–168

These errors have an important impact on the interpretation and generalisability of the results of a research project. The beauty of a well planned RCT is that these errors can all be effectively reduced or designed out (see box 1). The appropriate design strategies will be discussed below.

Box 1 Features of a well designed RCT

  • The sample to be studied will be appropriate to the hypothesis being tested so that any results are appropriately generalisable. The study will recruit sufficient patients to allow it to have a high probability of detecting a clinically important difference between treatments if a difference truly exists.
  • There will be effective (concealed) randomisation of the subjects to the intervention/control groups (to eliminate selection bias and minimise confounding variables).
  • Both groups will be treated identically in all respects except for the intervention being tested and, to this end, patients and investigators will ideally be blinded to which group an individual is assigned.
  • The investigator assessing outcome will be blinded to treatment allocation.
  • Patients are analysed within the group to which they were allocated, irrespective of whether they experienced the intended intervention (intention to treat analysis).
  • Analysis focuses on testing the research question that initially led to the trial (that is, according to the a priori hypothesis being tested), rather than 'trawling' to find a significant difference.

GETTING STARTED: DEVELOPING A PROTOCOL FROM THE INITIAL HYPOTHESIS

Analytical studies need a hypothesis that specifies an anticipated association between predictor and outcome variables (or no association, as in a null hypothesis), so that statistical tests of significance can be performed. 3 Good hypotheses are specific and formulated in advance of commencement (a priori) of the study. Having chosen a subject to research, and specifically a hypothesis to be tested, preparation should be thorough and is best documented in the form of a protocol that will outline the proposed methodology. This will start with a statement of the hypothesis to be tested, for example: "...that drug A is more efficacious in reducing the diastolic blood pressure than drug B in patients with moderate essential hypertension." An appropriate rationale for the study will follow, with a relevant literature review focused on any existing evidence relating to the condition or interventions to be studied.

The subject to be addressed should be of clinical, social, or economic significance to afford relevance to the study, and the hypothesis to be evaluated must contain outcomes that can be accurately measured. The subsequent study design (population sampling, randomisation, applying the intervention, outcome measures, analysis, etc) will need to be defined to permit a true evaluation of the hypothesis being tested. In practice, this will be the best compromise between what is ideal and what is practical.

Writing a thorough and comprehensive protocol in the planning stage of the research project is essential. Peer review of a written protocol allows others to criticise the methodology constructively at a stage when appropriate modification is possible. Seeking advice from experienced researchers, particularly involving a local research and development support unit or some other similar advisory centre, can be very beneficial. It is far better to identify and correct errors in the protocol at the design phase than to try to adjust for them in the analysis phase. Manuscripts rarely get rejected for publication because of inappropriate analysis, which is remediable, but rather because of design flaws.
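To show how such an a priori hypothesis translates into a pre-specified analysis, here is a minimal Python sketch (all numbers are invented, not data from any real trial) comparing diastolic blood pressure between two simulated arms, as in the drug A versus drug B example above:

```python
import math
import random
import statistics

# Simulate diastolic blood pressure (mm Hg) for two hypothetical arms.
rng = random.Random(20)
drug_a = [rng.gauss(88, 8) for _ in range(85)]
drug_b = [rng.gauss(92, 8) for _ in range(85)]

diff = statistics.mean(drug_a) - statistics.mean(drug_b)
se = math.sqrt(statistics.variance(drug_a) / len(drug_a)
               + statistics.variance(drug_b) / len(drug_b))
print(f"mean difference = {diff:.1f} mm Hg, 95% CI "
      f"({diff - 1.96 * se:.1f}, {diff + 1.96 * se:.1f})")
# If the CI excludes zero, the null hypothesis of no difference is rejected
# at the 5% level in favour of the a priori alternative.
```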
There are several steps in performing an RCT, all of which need to be considered while developing a protocol. The first is to choose an appropriate (representative) sample of the population from which to recruit. Having measured relevant baseline variables, the next task is to randomise subjects into one of two (or more) groups, and subsequently to perform the intervention as appropriate to the assignment of the subject. The pre-defined outcome measures will then be recorded and the findings compared between the two groups, with appropriate quality control measures in place to assure quality data collection. Each of these steps, which can be tested in a pilot study, has implications for the design of the trial if the findings are to be valid. They will now be considered in turn.

CHOOSING THE RIGHT POPULATION

This part of the design is crucial because poor sampling will undermine the generalisability of the study or, even worse, reduce the validity if sampling bias is introduced. 4 The task begins with deciding what kind of subjects to study and how to go about recruiting them. The target population is that population to which it is intended to apply the results. It is important to set inclusion and exclusion criteria defining target populations that are appropriate to the research hypothesis. These criteria are also typically set to make the researchers' task realistic, for within the target population there must be an accessible/appropriate sample to recruit.

The sampling strategy used will determine whether the sample actually studied is representative of the target population. For the findings of the study to be generalisable to the population as a whole, the sample must be representative of the population from which it is drawn. The best design is consecutive sampling from the accessible population (taking every patient who meets the selection criteria over the specified time period). This may produce an excessively large sample from which, if necessary, a subsample can be randomly drawn. If the inclusion criteria are broad, it will be easy to recruit study subjects and the findings will be generalisable to a comparatively large population. Exclusion criteria need to be defined and will include subjects who have conditions that may contraindicate the intervention to be tested, subjects who will have difficulty complying with the required regimens, those who cannot provide informed consent, etc.

In designing the inclusion criteria, the investigator should consider the outcome to be measured; if this is comparatively rare in the population as a whole, then it would be appropriate to recruit at random or consecutively from populations at high risk of the condition in question (stratified sampling). The subsamples in a stratified sample will draw disproportionately from groups that are less common in the population as a whole, but of particular relevance to the investigator. Other forms of sampling, where subjects are recruited because they are easily accessible or appropriate (convenience or judgmental sampling), will have advantages in terms of cost, time, and logistics, but may produce a sample that is not representative of the target population, and it is likely to be difficult to define exactly who has and has not been included.

Having determined an appropriate sample to recruit, it is necessary to estimate the size of the sample required to allow the study to detect a clinically important difference between the groups being compared. This is performed by means of a sample size calculation. 5 As clinicians, we must be able to specify what we would consider to be a clinically significant difference in outcome. Given this information, or an estimate of the effect size based on previous experience (from the literature or from a pilot study), and the design of the study, a statistical adviser will be able to perform an appropriate sample size calculation. This will determine the required sample size to detect the pre-determined clinically significant difference to a certain degree of power. As previously mentioned, early involvement of an experienced researcher or research support unit in the design stage is essential in any RCT.

After deciding on the population to be studied and the sample size required, it will now be possible to plan the appropriate amount of time (and money) required to collect the necessary data. A limited pilot of the methods is essential to gauge recruitment rate and address in advance any practical issues that may arise once data collection in the definitive study is underway. Pilot studies will guide decisions about designing approaches to recruitment and outcome measurement. A limited pilot study will give the investigator an idea of what the true recruitment rate will be (not just the number of subjects available, but also their willingness to participate). It may be even more helpful in identifying any methodological issues related to applying the intervention or measuring outcome variables (see below), which can then be appropriately addressed.

Summary: population sampling

  • The study sample must be representative of the target population for the findings of the study to be generalisable.
  • Inclusion and exclusion criteria will determine who will be studied from within the accessible population.
  • The most appropriate sampling strategy is normally consecutive sampling, although stratified sampling may legitimately be required.
  • A sample size calculation and pilot study will permit appropriate planning in terms of time and money for the recruitment phase of the main study.
  • Follow CONSORT guidelines on population sampling. 6
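To make consecutive sampling concrete, here is a minimal Python sketch (criteria and patient records are invented for illustration) that screens every attender over a study period against pre-defined inclusion and exclusion criteria:

```python
# Minimal sketch of consecutive sampling: every attender over the study
# period is screened, and all who qualify are approached for consent.
def eligible(patient):
    inclusion = patient["age"] >= 18 and patient["moderate_hypertension"]
    exclusion = patient["contraindication"] or not patient["can_consent"]
    return inclusion and not exclusion

attenders = [
    {"id": 1, "age": 45, "moderate_hypertension": True,
     "contraindication": False, "can_consent": True},
    {"id": 2, "age": 71, "moderate_hypertension": True,
     "contraindication": True, "can_consent": True},   # excluded
    {"id": 3, "age": 16, "moderate_hypertension": True,
     "contraindication": False, "can_consent": False},  # excluded
]

recruited = [p["id"] for p in attenders if eligible(p)]
print("recruited patient ids:", recruited)  # -> [1]
```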

RANDOMISATION: THE CORNERSTONE OF THE RCT

Various baseline characteristics of the subjects recruited should be measured at the stage of initial recruitment into the trial. These will include basic demographic observations, such as name, age, sex, hospital identification, etc, but more importantly should include any important prognostic factors. It will be important at the analysis stage to show that these potential confounding variables are equally distributed between the two groups; indeed, it is usual practice when reporting an RCT to demonstrate the integrity of the randomisation process by showing that there is no significant difference between baseline variables (following CONSORT guidelines 6 ).

The random assignment of subjects to one or another of two groups (differing only by the intervention to be studied) is the basis for measuring the marginal difference between these groups in the relevant outcome. Randomisation should equally distribute any confounding variables between the two groups, although it is important to be aware that differences in confounding variables may arise through chance.

Randomisation is one of the cornerstones of the RCT 7 and a true random allocation procedure should be used. It is also essential that treatment allocations are concealed from the investigator until recruitment is irrevocable, so that bias (intentional or otherwise) cannot be introduced at the stage of assigning subjects to their groups. 8 The production of computer generated sets of random allocations, by a research support unit (who will not be performing data collection) in advance of the start of the study, which are then sealed in consecutively numbered opaque envelopes, is an appropriate method of randomisation. Once the patient has given consent to be included in the trial, he/she is then irreversibly randomised by opening the next sealed envelope containing his/her assignment.

An alternative method, particularly for larger, multicentre trials, is to have a remote randomisation facility. The clinician contacts this facility by telephone when ready to randomise the next patient; the initials and study number of the patient are read to the person performing the randomisation, who records them and then reads back the randomisation for that subject.

Studies that involve small to moderate sample sizes (for example, less than 50 per group) may benefit from 'blocked' and/or 'stratified' randomisation techniques. These methods will balance (where chance alone might not) the groups in terms of the number of subjects they contain, and in the distribution of potential confounding variables (assuming, of course, that these variables are known before the onset of the trial). They are the design phase alternative to statistically adjusting for confounding variables in the analysis phase, and are preferred if the investigator intends to carry out subgroup analysis (on the basis of the stratification variable).

Blocked randomisation is a technique used to ensure that the number of subjects assigned to each group is equally distributed. Randomisation is set up in blocks of a pre-determined set size (for example 6, 8, 10, etc). Randomisation for a block size of 10 would proceed normally until five assignments had been made to one group, and then the remaining assignments would be to the other group until the block of 10 was complete. This means that for a sample size of 80 subjects, exactly 40 would be assigned to each group. Block size must be blinded from the investigator performing the study and, if the study is non-blinded, the block sizes should vary randomly (otherwise the last allocation(s) in a block would, in effect, be unconcealed).

Stratified randomisation is a technique for ensuring that an important baseline variable (potential confounding factor) is more evenly distributed between the two groups than chance alone might otherwise assure. In examining the effect of a treatment for cardiac failure, for example, the degree of existing cardiac failure will be a baseline variable predicting outcome, and so it is important that this is the same in the two groups. To achieve this, the sample can be stratified at baseline into patients with mild, moderate, or severe cardiac failure, and then randomisation occurs within each of these 'strata'. There is a limited number of baseline variables that can be balanced by stratification, because the numbers of patients within a stratum are reduced. In the above example, to stratify also for age, previous infarction, and the co-existence of diabetes would be impractical.

Summary: randomisation

  • The random assignment of subjects into one of two groups is the basis for establishing a causal interpretation for an intervention.
  • Effective randomisation will minimise confounding variables that exist at the time of randomisation.
  • Randomisation must be concealed from the investigator.
  • Blocked randomisation may be appropriate for smaller trials to ensure equal numbers in each group.
  • Stratified randomisation will ensure that a potential baseline confounding variable is equally distributed between the two groups.
  • Analysis of results should occur based on the initial randomisation, irrespective of what may subsequently actually have happened to the subject (that is, 'intention to treat analysis').

Sample attrition ('drop outs'), once subjects have consented and been randomised, may be an important factor. Patients may refuse to continue with the trial, they may be lost to analysis for whatever reason, and there may be changes in the protocol (or mistakes) subsequent to randomisation, even resulting in the patient receiving the wrong treatment. This is, in fact, not that uncommon: a patient randomised to have a minimally invasive procedure may need to progress to an open operation, for example, or a patient assigned to medical treatment may require surgery at a later stage. In the RCT, the analysis must include an unbiased comparison of the groups produced by the process of randomisation, based on all the people who were randomised; this is known as analysis by intention to treat. Intention to treat analysis depends on having outcomes for all subjects, so even if patients 'drop out', it is important to try to keep them in the trial if only for outcome measurement. This avoids the introduction of bias as a consequence of potentially selectively dropping patients from previously randomised/balanced groups.
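The blocked and stratified techniques above can be combined into a single concealed allocation list; the following minimal Python sketch (strata, block size and seed invented for illustration) prepares permuted blocks within each stratum, the software equivalent of a run of sealed opaque envelopes per stratum:

```python
import random
from itertools import chain

# Minimal sketch of stratified, blocked randomisation. Within each stratum,
# permuted blocks guarantee equal numbers in arms A and B.
rng = random.Random(164)
strata = ["mild", "moderate", "severe"]  # e.g. degree of cardiac failure
block_size = 6                           # three A and three B per block
blocks_per_stratum = 5

def permuted_block(size):
    block = ["A"] * (size // 2) + ["B"] * (size // 2)
    rng.shuffle(block)
    return block

allocation_lists = {
    s: list(chain.from_iterable(permuted_block(block_size)
                                for _ in range(blocks_per_stratum)))
    for s in strata
}

# Assignments are taken strictly in order as each consenting patient is
# randomised; the list is generated in advance and kept concealed from the
# recruiting investigators.
envelopes = {s: iter(allocation_lists[s]) for s in strata}
print(next(envelopes["moderate"]))  # first 'moderate' patient
print(next(envelopes["moderate"]))  # second 'moderate' patient
```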
APPLYING THE INTERVENTION AND MEASURING OUTCOME: THE IMPORTANCE OF BLINDING

After randomisation there will be two (or more) groups, one of which will receive the test intervention and another (or more) which receives a standard intervention or placebo. Ideally, neither the study subjects, nor anybody performing subsequent measurements and data collection, should be aware of the study group assignment. Effective randomisation will eliminate confounding by variables that exist at the time of randomisation. Without effective blinding, if subject assignment is known by the investigator, bias can be introduced because extra attention may be given to the intervention group (intended or otherwise). This would introduce variables into one group not present in the other, which may ultimately be responsible for any differences in outcome observed. Confounding can therefore also occur after randomisation. Double blinding of the investigator and patient (for example, by making the test treatment and standard/placebo treatments appear the same) will eliminate this kind of confounding, as any extra attentions should be equally spread between the two groups (with the exception, as for randomisation, of chance maldistributions).

While the ideal study design will be double blind, this is often difficult to achieve effectively, and is sometimes not possible (for example, surgical interventions). Where blinding is possible, complex (and costly) arrangements need to be made to manufacture a placebo that appears similar to the test drug, to design appropriate and foolproof systems for packaging and labelling, and to have a system to permit rapid unblinding in the event of any untoward event causing the patient to become unwell. The hospital pharmacy can be invaluable in organising these issues. Blinding may break down subsequently if the intervention has recognisable side effects. The effectiveness of the blinding can be systematically tested after the study is completed by asking investigators to guess treatment assignments; if a significant proportion are able to correctly guess the assignment, then the potential for this as a source of bias should be considered.

Once the intervention has been applied, the groups will need to be followed up and various outcome measures will be performed to evaluate the effect or otherwise of that intervention. The outcome measures to be assessed should be appropriate to the research question, and must be ones that can be measured accurately and precisely. Continuous outcome variables (quantified on an infinite arithmetic scale, for example, time) have the advantage over dichotomous outcome variables (only two categories, for example, dead or alive) of increasing the power of a study, permitting a smaller sample size. It may be desirable to have several outcome measures evaluating different aspects of the results of the intervention. It is also necessary to design outcome measures that will detect the occurrence of specified adverse effects of the intervention.

It is important to emphasise, as previously mentioned, that the person measuring the outcome variables (as well as the person applying the intervention) should be blinded to the treatment group of the subject, to prevent the introduction of bias at this stage, particularly when the outcome variable requires any judgement on the part of the observer. Even if it has not been possible to blind the administration of the intervention, it should be possible to design the study so that outcome measurement is performed by someone who is blinded to the original treatment assignment.

Summary: intervention and outcome

  • Blinding at the stage of applying the intervention and measuring the outcome is essential if bias (intentional or otherwise) is to be avoided.
  • The subject and the investigator should ideally be blinded to the assignment (double blind), but even where this is not possible, a blinded third party can measure outcome.
  • Blinding is achieved by making the intervention and the control appear similar in every respect.
  • Blinding can break down for various reasons, but this can be systematically assessed.
  • Continuous outcome variables have the advantage over dichotomous outcome variables of increasing the power of a study, permitting a smaller sample size.
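The end-of-study check on blinding can be expressed as a simple exact binomial comparison against the 50% success expected by chance; the counts in this Python sketch are invented:

```python
from math import comb

# Minimal sketch of assessing blinding: did investigators guess treatment
# assignments more often than chance would allow?
n_guesses, n_correct = 80, 52

# One-sided exact binomial probability of >= n_correct successes when the
# true guessing probability is 0.5.
p_value = sum(comb(n_guesses, k)
              for k in range(n_correct, n_guesses + 1)) / 2 ** n_guesses
print(f"{n_correct}/{n_guesses} correct guesses, one-sided p = {p_value:.4f}")
# A small p-value suggests guessing beat chance, so partial unblinding
# should be considered as a potential source of bias.
```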
They should be outcome variables (quantified on an infinite arithmetic scale, user friendly, self explanatory, clearly formatted, and collect for example, time) have the advantage over dichotomous out- only data that is needed. They can be tested in the pilot. Data come variables (only two categories, for example, dead or will subsequently need to be transcribed onto a computer alive) of increasing the power of a study, permitting a smaller database from these forms. The database should also be set up sample size. It may be desirable to have several outcome so that it is similar in format to the forms, allowing for easy measures evaluating different aspects of the results of the transcription of information. The database can be pre- intervention. It is also necessary to design outcome measures prepared to accept only variables within given permissible that will detect the occurrence of specified adverse effects of ranges and that are consistent with previous entries and to the intervention. alert the user to missing values. Ideally, data should be entered http://emj.bmj.com/ It is important to emphasise, as previously mentioned, that in duplicate, with the database only accepting data that are the person measuring the outcome variables (as well as the concordant with the first entry; this, however, is time person applying the intervention) should be blinded to the consuming, and it may be adequate to check randomly treatment group of the subject to prevent the introduction of selected forms with a printout of the corresponding datasheet bias at this stage, particularly when the outcome variable to ensure transcription error is minimal, acting appropriately requires any judgement on the part of the observer. Even if it if an unacceptably high number of mistakes are discovered. has not been possible to blind the administration of the inter- Once the main phase of data collection has begun, you on September 29, 2021 by guest. Protected copyright. vention, it should be possible to design the study so that out- should try to make as few changes to the protocol as possible. come measurement is performed by someone who is blinded In an ideal world, the pilot study will have identified any to the original treatment assignment. issues that will require a modification of the protocol, but inevitably some problem, minor or major, will arise once the QUALITY CONTROL study has begun. It is better to leave any minor alterations that are considered “desirable” but not necessary and resist the A critical aspect of clinical research is quality control. Quality inclination to make changes. Sometimes, more substantive control is often overlooked during data collection, a potentially issues are highlighted and protocol modification is necessary tedious and repetitive phase of the study, which may lead sub- to strengthen the study. These changes should be documented sequently to errors because of missing or inaccurate measure- and disseminated to all the investigators (with appropriate ments. Essentially, quality control issues occur in clinical pro- changes made to the operations manual and any re-training cedures, measuring outcomes, and handling data. Quality performed as necessary). 
Once the main phase of data collection has begun, you should try to make as few changes to the protocol as possible. In an ideal world, the pilot study will have identified any issues that require a modification of the protocol, but inevitably some problem, minor or major, will arise once the study has begun. It is better to leave aside any minor alterations that are considered "desirable" but not necessary, and to resist the inclination to make changes. Sometimes, more substantive issues are highlighted and protocol modification is necessary to strengthen the study. These changes should be documented and disseminated to all the investigators (with appropriate changes made to the operations manual and any retraining performed as necessary). The precise date on which the revision is implemented should be noted, with a view to separate analysis of data collected before and after the revision, if this is considered necessary by the statistical advisor. Such revisions to the protocol should only be undertaken if, after careful consideration, it is felt that making the alteration will significantly improve the findings, or that not changing the protocol will seriously jeopardise the project. These considerations have to be balanced against the statistical difficulty of analysis after protocol revision.

Summary: quality control

• An inadequate approach to quality control will lead to potentially significant errors due to missing or inaccurate results.
• An operations manual will allow standardisation of all procedures to be performed.
• To reduce interobserver variability in outcome measurement, training can be provided to standardise procedures in accordance with the operations manual.
• Data collection forms should be user friendly, self explanatory, and clearly formatted, with only truly relevant data being collected.
• Subsequent data transfer onto a computerised database can be safeguarded with various measures to reduce transcription errors.
• Protocol revisions after the study has started should be avoided if at all possible but, if necessary, should be appropriately documented and dated to permit separate analysis.


...SOME FINAL THOUGHTS

A well designed, methodologically sound RCT evaluating an intervention provides strong evidence of a cause-effect relation if one exists; it is therefore powerful in changing practice to improve patient outcome, this being the ultimate goal of research on therapeutic effectiveness. Conversely, poorly designed studies are dangerous because of their potential to influence practice on the basis of flawed methodology. As discussed above, the validity and generalisability of the findings depend on the study design.

Early involvement of the local research support unit is essential in developing a protocol. Subsequent peer review and ethical committee review will ensure that it is well designed, and a successful pilot will ensure that the research goals are practical and achievable.

Delegate tasks to those who have the expertise: for example, allow the research support unit to perform the randomisation, leave the statistical analysis to a statistician, and let a health economist advise on any cost analysis. Networking with the relevant experts is invaluable in the design phase and will contribute considerably to the final credence of the findings.

Finally, dissemination of the findings through publication is the final peer review process and is vital in helping others act on the available evidence. Writing up the RCT at completion, like developing the protocol at inception, should be thorough and detailed (following the CONSORT guidelines6,9), with emphasis not just on the findings but also on the methodology. Potential limitations and sources of error should be discussed so that the readership can judge for themselves the validity and generalisability of the research.10

Further reading

Sackett DL, Haynes RB, Guyatt GH, et al. Clinical epidemiology: a basic science for clinical medicine. 2nd edn. Toronto: Little, Brown, 1991.
Sackett DL, Richardson WS, Rosenberg W, et al. Evidence-based medicine: how to practice and teach EBM. Edinburgh: Churchill Livingstone, 1997.
Polgar S. Introduction to research in health sciences. 2nd edn. Edinburgh: Churchill Livingstone, 1991.
Bland M. An introduction to medical statistics. Oxford: Oxford Medical Publications, 1987.

REFERENCES

1 Sibbald B, Roland M. Understanding controlled trials: why are randomised controlled trials important? BMJ 1998;316:201.
2 Pocock SJ. Clinical trials: a practical approach. Chichester: Wiley, 1984.
3 Hulley SB, Cummings SR. Designing clinical research: an epidemiological approach. Chicago: Williams and Wilkins, 1988.
4 Bowling A. Research methods in health: investigating health and health services. Buckingham: Open University Press, 1997.
5 Lowe D. Planning for medical research: a practical guide to research methods. Middlesbrough: Astraglobe, 1993.
6 Begg C, Cho M, Eastwood S, et al. Improving the quality of reporting of randomised controlled trials: the CONSORT statement. JAMA 1996;276:637–9.
7 Altman DG. Randomisation. BMJ 1991;302:1481–2.
8 Schulz KF, Chalmers I, Hayes RJ, et al. Empirical evidence of bias: dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 1995;273:408–12.
9 The Standards of Reporting Trials Group. A proposal for structured reporting of randomized controlled trials. JAMA 1994;272:1926–31.
10 Altman DG. Better reporting of randomised controlled trials: the CONSORT statement. BMJ 1996;313:570–1.


Principles of Research Design and Drug Literature Evaluation, 2e

Chapter 18:  Evaluating Randomized Controlled Trials

Erin M. Timpe Behnen; McKenzie C. Ferguson

Chapter Objectives
Identify and describe the use of formal criteria to assess the quality of randomized trials

Assess validity issues in randomized trials

Apply general criteria to evaluate methodological rigor in randomized trials

Evaluate common biases in randomized trials

Interpret and apply key findings in clinical practice

Key Terminology

Chalmers Scale

Composite endpoint

Consolidated Standards of Reporting Trials (CONSORT)

Construct validity

Jadad scale

Introduction

Randomized controlled trials (RCTs) can provide the strongest evidence when they are well-designed and conducted. Unfortunately, poor study design and methodology may produce misleading results and clinical evidence that may ultimately impact treatment decisions reaching patients. 1 Several studies have evaluated the conduct and reporting of RCTs and have found that more than half of those analyzed had missing or incomplete key information regarding the methods used for allocation of patients, blinding, reporting a defined primary endpoint, and sample size calculation. 2–6 This highlights the importance of critical evaluation of clinical trials by clinicians.

Treatment considerations are often based on the evidence derived from RCTs. Although these trials are designed to provide the strongest evidence for patient care, any flaw in study design and implementation can undermine the results and ultimately affect the evidence base. Clinicians should be able to identify flaws in study design and implementation, and further evaluate the impact of those flaws on the results. Patient care treatment decisions require a thorough understanding of the evidence in the context of current practice, study design and implementation, and patient-specific considerations. Chapter 4, Randomized Controlled Trials, provided details regarding the conduct of randomized trials. This chapter will identify guidelines for standard reporting of RCTs and will describe scales and checklists available for assessing randomized trials. Factors to consider when evaluating internal and external validity and other issues for critically evaluating RCTs will be outlined, and applications to an example article will be included. Finally, considerations for application of RCT findings to patient care will be described and applied.



Randomized Controlled Trials

Emily C. Zabor

a Department of Quantitative Health Sciences & Taussig Cancer Institute, Cleveland Clinic, Cleveland, OH

Alexander M. Kaizer

Brian P. Hobbs

b Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado-Anschutz Medical Campus, Aurora, CO

Randomized controlled trials (RCTs) are considered the highest level of evidence to establish causal associations in clinical research. There are many RCT designs and features that can be selected to address a research hypothesis. Designs of RCTs have become increasingly diverse as new methods have been proposed to evaluate increasingly complex scientific hypotheses. This article reviews the principles and general concepts behind many common RCT designs and introduces newer designs that have been proposed, such as adaptive and cluster randomized trials. A focus on the many choices for randomization within an RCT is described, along with their potential tradeoffs. To illustrate their diversity, examples of RCTs from the literature are provided. Statistical considerations, such as power and type I error rates, are discussed with the intention of providing practical guidance about how to specify study hypotheses that address the scientific question while being statistically appropriate. Finally, the freely available Consolidated Standards of Reporting Trials guidelines and US Food and Drug Administration guidance documents are introduced, along with a set of guidelines one should consider when planning an RCT or reviewing RCTs submitted for publication in peer-reviewed academic journals.

General Overview of Study Design

Clinical studies are conducted among human participants to generate new knowledge through describing the impact of interventions devised to improve the diagnosis, prevention, and treatment of human disorders. There are many types of designs for clinical studies, but they all aim to obtain objective data to evaluate interventions with respect to an associated outcome in a target population. The two main types of clinical study are: (1) clinical trials, in which participants are assigned to receive a certain intervention according to a prespecified research plan and are then followed up prospectively to observe the outcome of interest; and (2) observational studies, in which the study investigators do not assign the exposure or intervention that participants receive. The quality of evidence generated by any study is determined by its experimental design. In all clinical studies, bias may be introduced due to misclassification of interventions or outcomes and missing data. In nonrandomized studies, bias may also be introduced through selection of the included participants and confounding due to differences in prognostic characteristics. These sources of bias may make it difficult or impossible to measure precisely the impact of the intervention under study.

The strongest evidence for causality between the exposure or intervention under study and the outcome observed comes from prospective experimental designs that randomly assign interventions to trial participants. Randomized controlled trials (RCTs) have traditionally been viewed as the gold standard of clinical trial design, residing at the top of the hierarchy of levels of evidence in clinical study; this is because the process of randomization can minimize differences in characteristics of the groups that may influence the outcome, thus providing the most definitive evidence regarding the impact of the exposure or intervention on the outcome. 1 , 2 In an RCT, one or more treatments are compared vs a control group, and patients are assigned to treatment or control by chance, such as by rolling a die or flipping a coin. Each group in an RCT is called an “arm,” so that, for example, a two-arm study may compare an experimental treatment vs a control group, and these would then be referred to as the “treatment arm” and the “control arm,” respectively.

Description of Subtypes of Study Design

Active Control

In an RCT, the control arm can take a variety of forms. If there is a well-established treatment for the disease under consideration, this standard-of-care treatment could then be used as the control arm, against which the novel experimental treatment is compared. In an active control trial, the goal may be to show that the experimental treatment is superior to the standard-of-care treatment (ie, superiority study), to show that the experimental treatment is similar to the standard-of-care treatment (ie, equivalence study), or simply to show that the experimental treatment is not much less effective than the standard-of-care treatment (ie, noninferiority study). If there is already a known treatment for the condition under study that will be used as the active control arm, then it is very important, for ethical reasons, to ensure that there is sound scientific rationale that the experimental treatment will be at least as effective. 3

Placebo Control

In the absence of an effective treatment for a disease, the control arm may consist of a group of patients receiving no treatment or receiving sham treatment, known as a placebo group. In a placebo group, inactive medication is given to patients in a way that is indistinguishable from the active treatment. For example, if patients assigned to the experimental treatment receive a small white capsule that they take three times a day, then the placebo group would receive a small white capsule with similar appearance, although containing no active ingredients, to take three times a day. In this way, effects of taking medication in and of itself, known as the placebo effect, are controlled between the two groups to reduce bias. Placebo-controlled trials seek to show that the experimental treatment is superior to the placebo. If no effective treatment is available for the condition being studied, then there are generally minimal ethical problems with a placebo-controlled trial. However, it is generally inappropriate and unethical to use a placebo if a treatment that improves outcomes is available for the condition under study. 3

Multiple Arm

There are two main types of multiple-arm trials. The first includes multiple dose levels or regimens of the experimental treatment all compared vs a single-control arm. In these so-called dose-response studies, it is ideal to include a zero-dose, or placebo, arm to avoid a situation in which all doses show similar activity and to establish whether any of the doses was superior to no treatment. 3 The second involves a single treatment arm with multiple control arms (eg, both an active control and a placebo control arm).

Cluster Randomized

In a cluster randomized trial, groups of subjects are randomized as opposed to individual subjects. There are several reasons for randomizing clusters as opposed to individuals, including administrative convenience, ethical considerations, and ease of application at the cluster level. This trial design is more common in health services and policy research as opposed to studies of drug interventions. For example, it may be of interest to randomize hospitals in a study of a new educational initiative for physicians both for ease of implementation of the intervention at the hospital level, as well as to avoid within-hospital contamination across individual physicians receiving different interventions. In a cluster randomized trial, it is important to carefully consider the impact of both the number of clusters and the cluster size on the power of the study. 4 A pragmatic trial design known as the stepped wedge cluster randomized trial has been gaining in popularity. In this trial design, aimed at eliminating logistical constraints, each cluster undergoes a period, or “step,” with no intervention followed by a step with exposure to the intervention. 5

Adaptive Design

In traditional RCTs, most trial elements, such as randomization allocations and the number of study arms, are fixed throughout the trial. However, adaptive designs, in which accumulating information during the trial is used to modify some aspect of the trial, are becoming increasingly popular. For example, accumulating information may inform the randomization assignment of the next patient to enroll, which represents a form of adaptive randomization. These designs may allow more patients to be accrued to the arm that is showing more promise, thus reducing ethical concerns about continuing enrollment on fixed randomization designs in the face of possibly increasing evidence that one of the treatments under study is superior and allowing more patients in the course of the trial to be given more effective treatments or doses. 6 , 7 We note that to maintain the established scientific and ethical standards of randomized comparative trials with the acquisition of evidence that is both prospective and objective, it is essential to prespecify potential avenues for adaptation as well as establish corresponding statistical criteria in advance of implementing the trial.

Platform trials describe multiple-arm studies with the potential to include control and experimental arms that can be opened or closed for enrollment throughout the course of the trial based on decision rules regarding efficacy. 8 , 9 In this way, ineffective treatments can be discontinued before many patients have been treated with them, and newly emerging treatments that show promise can be added at any point. These designs allow investigation into more experimental treatments in a shorter period of time. In addition, compared with a series of stand-alone, concurrent two-arm designs, platform designs allow more patients to be assigned to experimental treatment arms as opposed to control arms. 10

Use Cases of Study Design

The medical literature contains a multitude of examples of RCTs across many disease and intervention types. Some examples of recent two-arm randomized controlled trials published in CHEST are presented in Table 1 (references 11–15) to demonstrate various applications of RCTs, although these examples are not exhaustive.

Table 1

Examples of Randomized Controlled Trials Published in CHEST

Benefits of Study Design

The primary benefit of RCTs comes from the randomization itself, which greatly reduces confounding from both known and unknown sources. In nonrandomized studies, it may be possible to control for known confounders, but it is much more difficult to control for unknown or unmeasured confounders, although some methods that attempt to do so are available. With randomization, causal conclusions regarding the exposure or intervention and outcome can be made. Additional benefits stem from the controlled and prospective nature of an RCT. The dosage, timing, frequency, and duration of treatment can be controlled, and blinding may be possible. Blinding refers to a treatment assignment being unknown. In a single-blind study, patients do not know which treatment they are receiving. Blinding the patient prevents outcomes from being influenced by knowledge of treatment assignment. This is particularly important if any outcomes are self-reported. In a double-blind study, neither the patient nor the provider knows the treatment assignment. This additionally ensures that any care given by the provider or provider-assessed outcomes are not biased by knowledge of treatment assignment. Blinding is typically not possible in other study designs.

Downsides of Study Design

There are also disadvantages to RCTs. Because RCTs are highly controlled, the inclusion and exclusion criteria may lead to a homogeneous patient population, thus limiting the generalizability of the results to the broader population of patients with the condition being studied. RCTs also tend to study treatments in idealized environments, which may not be perfectly in-line with real-world usage of the treatment. Due to the complexities of design and conduct, RCTs are expensive and can take a long time to complete. RCTs are also not always feasible, for example, if a disease is very rare or if there are special considerations surrounding a disease that make randomized allocation either impractical or unethical.

Study Subject Considerations

An important consideration in the design of an RCT is the subject inclusion and exclusion criteria. These criteria will affect a variety of aspects of the conduct and interpretability of the study results and are primarily meant to ensure patient safety. 16 If eligibility criteria are too strict, it could be difficult to enroll the planned number of patients because of a potentially cumbersome screening process, as many screened patients may prove to be ineligible. In addition, very strict eligibility criteria could result in a study population that is not reflective of the broader target population, thus limiting the generalizability of the study result and the ability to establish the effectiveness of the treatment. These considerations need to be balanced with the fact that more strict exclusion criteria may be necessary to establish an intervention’s efficacy. It is especially important to ensure there are medically valid reasons for excluding any commonly underrepresented groups, such as women, racial and ethnic minority groups, pregnant and breastfeeding women, and children. 17

Equally important as the factors an investigator must consider when establishing patient eligibility criteria are the factors that potential study subjects consider when deciding whether to participate in a given trial. Most patients choose to participate in clinical trials with the hope of receiving a novel therapy from which they may benefit, yet the chance of receiving placebo or standard of care is also likely (often 50% or 33%). For these patients, the effort to participate in the trial is an altruistic act often driven by a desire to further scientific knowledge that may benefit future patients, if not themselves. With this in mind, investigators should additionally consider the burden placed on patients’ time and energy throughout the course of the trial, and weigh that against the scientific importance of additional follow-up visits and invasive or time-consuming testing procedures.

Statistical Considerations

End Point Definition

There are several statistical considerations that an investigator must recognize when designing an RCT. The first is a clear and specific definition of the study end point. The end point needs to be an event or outcome that can be measured objectively so that the experimental group can be compared with the control group. For example, a study may wish to compare overall survival, rates of myocardial infarction, or improvements in quality of life. When selecting an end point for an RCT, the investigator must consider how the end point will be measured so that this will be standardized across all patients enrolled in the trial. The timing of assessment of the end point is also important to consider. For example, if interest is in overall survival, patients may need to be followed up for a long time before enough patients have died to determine whether there is a difference between the study arms. Alternatively, if interest is in rates of myocardial infarction, a time frame for occurrence of the event could be defined, such as myocardial infarction within 1 year of treatment start. It is common to have a primary end point in a study that is used as the basis for determining the needed sample size and ultimately making a decision regarding the efficacy of the experimental treatment, and then to include secondary end points as well; these secondary end points are more exploratory in nature and would not be used to make decisions about whether the experimental treatment is better than the control.
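As an illustration of a precisely defined, consistently timed end point, the sketch below (hypothetical data, not from the article) derives the binary end point "myocardial infarction within 1 year of treatment start" from raw event times so that it is computed identically for every patient:

```python
import pandas as pd

# Hypothetical raw data: days from treatment start to MI (NaN = no MI observed).
patients = pd.DataFrame({
    "id": [1, 2, 3],
    "days_to_mi": [200.0, float("nan"), 500.0],
    "days_followed": [400, 365, 540],
})

# Binary end point: MI within 1 year of treatment start. (Patients censored
# before day 365 without an event would need separate handling, eg,
# time-to-event methods.)
patients["mi_within_1yr"] = (
    patients["days_to_mi"].notna() & (patients["days_to_mi"] <= 365)
)
print(patients[["id", "mi_within_1yr"]])
```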

Sometimes in a clinical study, it may be of interest to determine whether the experimental treatment is efficacious with respect to more than one end point. In such cases, it is possible to have co-primary end points or to use composite end points. Co-primary end points require that all primary end points indicate that the experimental treatment is superior to control to conclude efficacy, and they require special statistical considerations (discussed later in more detail). Composite end points are a single end point composed of more than one measure. For example, disease-free survival is a composite end point defined as recurrence of disease or death. Use of a composite end point can increase the number of events observed in a study; this use must be considered carefully, however, to ensure that the true outcome of interest will be captured by the composite end point and that the individual components of the composite end point align.

One area of caution in selecting an end point for an RCT relates to the use of surrogate end points. Surrogate end points are used to represent an end point that is either difficult to measure or takes too long to occur. For example, interest may be in death from heart disease but a surrogate end point of change in cholesterol level from baseline may be used. To confidently use a surrogate end point, an investigator must be certain that the effect of the intervention on the surrogate predicts the effect of the intervention on the true clinical end point of interest, which is often difficult or impossible to establish.

Effect Size

Once the end point is specifically defined, the investigators must then establish the meaningful difference between groups that they seek to detect. This is a clinical decision that has statistical implications for the design with respect to the number of patients that will be needed in the study. For example, in a study of an experimental treatment for lung cancer, in which overall survival is the primary end point, we know that 1-year overall survival on the standard-of-care treatment is 70%. Interest is in improving overall survival to 80% in patients on the experimental treatment. These rates can now be used to define the study hypotheses and determine the sample size required to conduct the study.

Power, Error Rates, and Sample Size

The traditional statistical approach considers two possible outcomes of the trial. Consider a two-arm RCT in which p_t represents the rate of a binary outcome in the treatment arm and p_c represents the rate in the control arm. The null hypothesis then represents the scenario in which the treatment has no benefit and is denoted as H_0: p_t = p_c to indicate that the rates are equivalent in the treatment and control arms. The alternative hypothesis represents the scenario in which the treatment and control differ and is denoted as H_A: p_t ≠ p_c to indicate that the rates are not equivalent in the treatment and control arms. Note that in this example we have used what is termed a "two-sided alternative," meaning that we are looking for the two rates to not be equal, but the rate in the treatment group could be either higher or lower than the rate in the control group. A two-sided alternative such as this provides the most definitive evidence about an experimental treatment. However, the use of a two-sided alternative will require that a larger number of patients be enrolled in the trial, and at times, a "one-sided alternative" (choosing either H_A: p_t > p_c or H_A: p_t < p_c as the specified alternative hypothesis) could be appropriate.
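For concreteness, here is a minimal sketch of the two-sided comparison implied by H_A: p_t ≠ p_c, using a normal-approximation z-test for two proportions; the counts are hypothetical:

```python
from math import sqrt
from scipy.stats import norm

x_t, n_t = 160, 200   # responders / patients, treatment arm (hypothetical)
x_c, n_c = 140, 200   # responders / patients, control arm (hypothetical)

p_t, p_c = x_t / n_t, x_c / n_c
p_pool = (x_t + x_c) / (n_t + n_c)                      # pooled rate under H_0
se = sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))  # standard error under H_0
z = (p_t - p_c) / se
p_two_sided = 2 * norm.sf(abs(z))                       # both tails
print(f"z = {z:.2f}, two-sided p = {p_two_sided:.4f}")
```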

Now that the end point has been defined, the effect size of interest has been established, and the null and alternative hypotheses are fixed, the sample size needed for analysis can be calculated. Sample size is traditionally based on error rates. A type I error is rejecting a null hypothesis when it is true, and a type II error is failing to reject the null hypothesis when it is false, in which case the alternative hypothesis is assumed to be true. The type I error rate is conventionally set to 0.05, meaning that we are willing to accept a 1 in 20 chance that we will claim there is a difference between groups when in truth there is no difference. The complement of type II error (ie, 1 – type II error) is known as statistical power. Statistical power represents the probability of rejecting the null hypothesis when the alternative hypothesis is true. It is commonplace to ensure that statistical power is ≥ 0.8 in an RCT. The specific formula used to calculate the sample size will depend on many things, including the type of end point (eg, binary, continuous, time-to-event) as well as the study design, the type I and type II error rates, the variance of the end point, and allocation ratios to the various study arms. The calculation should also account for patients who drop out or who are lost to follow-up during the study. The details of all possibilities are outside the scope of this commentary, and a statistician should be consulted when designing an RCT.
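As a worked illustration only (a statistician should still be consulted in practice), the sketch below applies the standard normal-approximation sample-size formula for two proportions to the lung cancer example above: 1-year survival of 70% vs a hoped-for 80%, two-sided type I error of 0.05, and power of 0.80:

```python
from math import ceil, sqrt
from scipy.stats import norm

p_c, p_t = 0.70, 0.80          # control and hoped-for treatment rates
alpha, power = 0.05, 0.80      # conventional error rates
z_a = norm.ppf(1 - alpha / 2)  # two-sided test
z_b = norm.ppf(power)

p_bar = (p_c + p_t) / 2        # average rate used under the null
n = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
      + z_b * sqrt(p_c * (1 - p_c) + p_t * (1 - p_t))) ** 2) / (p_t - p_c) ** 2
print(f"approximately {ceil(n)} patients per arm")
# about 294 per arm, before inflating for dropout and loss to follow-up
```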

For trials designed to examine multiple end points, such as the co-primary end points described previously, the possibility of committing a type I error may occur within each end point, yielding two types of false conclusions. 18 Marginal type I error rates estimate the type I error for each end point separately, whereas family-wise type I error rates consider the entire trial in violation if the null hypothesis is falsely rejected for a single end point. A family-wise type I error represents stronger control against false-positive findings for individual end points when directly compared with a marginal type I error. Marginal and family-wise errors are identical when testing a single end point. When multiple end points are being examined in a trial, control of family-wise error can be accomplished through adjustment for multiple comparisons. Common techniques include the Bonferroni procedure, 19 the Šidák procedure, 20 or the Holm procedure. 21 There are many others, however, and consultation with a statistician is advised when designing a study that requires adjustment for multiple comparisons.
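A minimal sketch of two of the corrections named above, Bonferroni and the step-down Holm procedure, applied to hypothetical p-values:

```python
def bonferroni(pvals):
    # Each p-value is multiplied by the number of tests, capped at 1.
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

def holm(pvals):
    # Step-down procedure: smallest p gets the largest multiplier, and
    # adjusted values are forced to be non-decreasing in rank order.
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted, running_max = [0.0] * m, 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

raw = [0.010, 0.020, 0.030]   # hypothetical p-values for three end points
print(bonferroni(raw))        # [0.03, 0.06, 0.09]
print(holm(raw))              # [0.03, 0.04, 0.04] -- uniformly less conservative
```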

Methods for Randomization

Figure 1 depicts a flowchart for selecting from among the following methods for randomization.


Figure 1. Flowchart for selecting a method of randomization.

Simple Randomization

In simple randomization, no restrictions are placed on the randomization sequence other than the final sample size. The randomization can be conducted with the flip of a coin or any other procedure that assigns each enrolling patient to an arm with an equal probability (eg, 1/2 for a two-arm study). However, because only the total sample size (and not the per-arm sample sizes) is being controlled in simple randomization, imbalances in the numbers assigned to the two or more arms are possible; this could lead to imbalances in subject characteristics, especially when the total sample size is small.
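A minimal sketch of simple randomization for a two-arm trial; as the text notes, per-arm counts are not guaranteed to balance:

```python
import random

rng = random.Random(2024)  # fixed seed so the sequence is reproducible

# Each enrolling patient is assigned independently with probability 1/2.
assignments = [rng.choice(["treatment", "control"]) for _ in range(20)]
print(assignments.count("treatment"), "treatment vs",
      assignments.count("control"), "control")  # counts may be unequal
```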

Permuted Block Randomization

To overcome the possible imbalances that can arise from simple randomization, the permuted-block design divides patients into blocks over time and balances the randomization to each arm within each block. If the total sample size is a multiple of the block size, balance is then guaranteed at the end of the study. 22 It is also possible to use unequal block sizes throughout the study, which would serve to further obscure any predictability of future treatment assignments.
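A minimal sketch of permuted-block randomization with a fixed (and hypothetical) block size of 4, which guarantees balance at every block boundary; as noted above, varying the block size would further obscure upcoming assignments:

```python
import random

def permuted_block_sequence(n_patients, block_size=4, seed=7):
    """Generate assignments in shuffled blocks; block_size must be even."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_patients:
        block = ["treatment", "control"] * (block_size // 2)
        rng.shuffle(block)           # random order within the block
        sequence.extend(block)
    return sequence[:n_patients]

print(permuted_block_sequence(12))   # balanced 6 vs 6 after three blocks
```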

Minimization

Although simple randomization will frequently control imbalance in prognostic factors between arms, it is still possible to have imbalances, especially if the sample size is relatively small (ie, ≤ 100). Often this is overcome through stratified randomization, in which simple randomization is conducted within groups based on important prognostic characteristics to ensure balance within those features. Stratified randomization can become cumbersome as the number of prognostic factors increases, and the strata must be accounted for when analyzing the resulting study data. Another approach, minimization, was introduced as an alternative to stratified randomization. With the minimization approach, assignment to a study arm is done with the intention of achieving balance between randomization groups with respect to prognostic factors of interest. 23 Minimization can greatly reduce prognostic imbalances between groups but at the cost of truly random treatment assignment, as assignment to a study arm is determined by the characteristics of patients who have already been assigned and not according to chance alone.
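The sketch below illustrates the idea of minimization over two hypothetical prognostic factors (sex and age group). It is a simplified deterministic variant: each patient goes to whichever arm yields the smaller total imbalance across that patient's own factor levels, with ties broken at random. Real procedures such as Pocock-Simon typically assign the minimizing arm with high probability rather than always:

```python
import random

rng = random.Random(1)
ARMS = ("treatment", "control")
# Running counts of each factor level within each arm (all start at zero).
counts = {arm: {("sex", "F"): 0, ("sex", "M"): 0,
                ("age", "<65"): 0, ("age", ">=65"): 0} for arm in ARMS}

def assign(levels):
    """Assign a patient (a list of factor levels) to minimise imbalance."""
    def imbalance_if_joined(arm):
        other = ARMS[1] if arm == ARMS[0] else ARMS[0]
        # Total excess of `arm` over the other arm, across this patient's
        # levels, after a hypothetical assignment to `arm`.
        return sum((counts[arm][f] + 1) - counts[other][f] for f in levels)
    scores = {arm: imbalance_if_joined(arm) for arm in ARMS}
    best = min(scores.values())
    arm = rng.choice([a for a, s in scores.items() if s == best])  # break ties
    for f in levels:
        counts[arm][f] += 1
    return arm

print(assign([("sex", "F"), ("age", ">=65")]))  # first patient: random tie-break
print(assign([("sex", "F"), ("age", "<65")]))   # steered toward the emptier arm
```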

Outcome Adaptive Randomization

Adaptive randomization refers to randomization procedures that adjust the allocation ratio as the study progresses. In outcome adaptive randomization, the goal is to assign more patients to the more promising treatments based on the accumulating data in the trial to minimize the expected number of treatment failures and to overcome the questionable ethics of continuing to assign patients to treatment arms for which there is evidence of poorer efficacy. 6 The most commonly used outcome adaptive randomization designs are based on the Bayesian statistical paradigm and include Bayesian adaptive randomization 24 and response adaptive randomization. 25
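As a simplified illustration of Bayesian outcome-adaptive randomization (hypothetical interim counts; a Thompson-sampling-style rule, which is one of several possibilities), the allocation probability tracks the posterior probability that an arm is best:

```python
import random

rng = random.Random(42)
# Hypothetical interim results: binary outcomes observed so far per arm.
successes = {"treatment": 12, "control": 7}
failures  = {"treatment": 8,  "control": 13}

def next_assignment():
    # Draw once from each arm's Beta(1 + successes, 1 + failures) posterior
    # and assign the next patient to the arm with the larger draw.
    draws = {arm: rng.betavariate(1 + successes[arm], 1 + failures[arm])
             for arm in successes}
    return max(draws, key=draws.get)

print([next_assignment() for _ in range(10)])  # tends to favour the better arm
```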

Analytic Considerations

Prior to conducting an RCT, the analysis plan should be detailed. There are several ways that RCT data can be analyzed to account for lack of adherence. Consider a patient who is randomized to the experimental treatment arm but for whatever reason discontinues use of the treatment before completing the trial-specified regimen. How should this patient be incorporated into the statistical analysis at the end of the trial? One approach is “intention-to-treat,” in which all patients are analyzed in the group to which they were randomized, regardless of their adherence to the treatment regimen. Intention-to-treat analysis is recommended in superiority trials to reduce bias, as the original randomization assignment is maintained. An alternative is a “per-protocol” analysis. In this approach, only patients who completed the treatment to which they were originally assigned are analyzed. A per-protocol analysis may lead to a purer estimate of the treatment effect, but the results of the analysis should be interpreted cautiously because bias can be introduced by the reasons subjects did not adhere to the planned treatment regimen. Often in RCTs, intention-to-treat will be the primary analysis approach but a per-protocol analysis will additionally be performed as a secondary analysis.

Another analytic consideration is how to accommodate the potential for treatment effect heterogeneity, which is the possibility that the treatment has different effects in different subgroups of patients. The heterogeneity of treatment effect, in its extreme, can make the overall population effect seem clinically insignificant when there are certain subpopulations that would benefit from the treatment and other subpopulations that do not benefit. To avoid potential inflation of the type I error rate, any subgroup analyses should be specified prior to the trial, rather than as a post hoc attempt at salvaging a trial with a null result. Traditionally, the approach has consisted of a priori subgroup analyses that consider subgroups “one variable at a time,” with results graphically presented in forest plots. However, consensus building research has sought to identify more efficient methods to consider all relevant patient attributes simultaneously with the Predictive Approaches to Treatment Effect Heterogeneity statement. 26

Reporting Considerations

The Consolidated Standards of Reporting Trials guidelines ( http://www.consort-statement.org/ ) were established to guide investigators in appropriate reporting of results of RCTs. These guidelines consist of a 25-item checklist for use in putting together a report on an RCT, as well as a template for a flow diagram to include in the trial report indicating the breakdown of sample size at various stages of the study.

Available Standards

The most comprehensive guidelines regarding the design and conduct of clinical trials are published by the US Food and Drug Administration ( https://www.fda.gov/regulatory-information/search-fda-guidance-documents ). In addition to documents referenced earlier that provide guidance on selection of an appropriate control group, information regarding the use of placebos and blinding is also available. The International Conference on Harmonisation of Technical Requirements for Pharmaceuticals for Human Use has also created guidelines for the conduct of clinical trials that cover the selection of end points, various design options, trial monitoring, and analytic considerations. 27

Short List of Questions to Guide the Researcher

  • 1. Consult with a statistician early and often. It is never too early to involve a statistician in planning the design of an RCT.
  • 2. Select an intervention(s). For treatment trials, dosage and duration of treatment should also be determined.
  • 3. Select an appropriate control arm and consider the practical and ethical considerations of using a placebo or standard of care.
  • 4. Define the study end point as specifically as possible, including when it will be assessed.
  • 5. Establish the effect size of interest. How much improvement are you hoping to see in the experimental treatment arm?
  • 6. Write down the study hypotheses and determine whether a one-sided or two-sided alternative hypothesis is most appropriate.
  • 7. Select acceptable type I and type II error rates and calculate the sample size needed to detect the desired effect size. If multiple primary end points are being used, be sure to consider marginal vs family-wise error rates.
  • 8. Determine the feasibility of accruing the number of patients needed according to the calculated sample size. Can this be accomplished at a single institution, or are multiple institutions needed? If multiple institutions are needed, how does this affect study implementation?
  • 9. Write down the analysis plan in detail before beginning the study.

Short List of Questions to Guide the Reviewer

When reviewing a manuscript describing a randomized controlled trial, consider commenting on the following:

  • 1. The exposure or intervention in the treatment arm and control arm. Was there justification for the exposure or intervention in the treatment arm? If the control arm received standard of care, was an appropriate standard of care applied? If there was a placebo control arm, was there a possibility of distinguishing the treatment and control arms by the nature of the intervention?
  • 2. Key features of the study methodology. Were appropriate study end point(s) chosen? Was their measurement accurate and consistent? Was the randomization procedure appropriate? Were details of the sample size calculation, including the anticipated effect of the intervention, provided? Was drop out handled appropriately in the planning and analysis of the trial? Is the Consolidated Standards of Reporting Trials guidelines flowchart included in the report?
  • 3. The reported results and their interpretation. Were the reported results in line with the planned analyses? Was the interpretation of the results made based on the planned primary end point? Was the interpretation of the results appropriate?

Acknowledgments

Financial/nonfinancial disclosures: The authors have reported to CHEST the following: B. P. H. is scientific advisor to Presagia; consultant for SimulStat; and receives research funds from Amgen. None declared (E. C. Z., A. M. K.).

Role of sponsors: The sponsor had no role in the design of the study, the collection and analysis of the data, or the preparation of the manuscript.

FUNDING/SUPPORT: B. P. H. was supported in part by the Case Comprehensive Cancer Center [P30 CA043703].
