
  • Published: 08 May 2023

Support for those affected by scientific misconduct is crucial

  • Marret K. Noordewier (ORCID: orcid.org/0000-0001-9084-2882)

Nature Human Behaviour volume 7, page 830 (2023)


  • Scientific community
  • Social sciences

Cases of scientific misconduct can have a massive impact on scholars (especially junior scholars), and repercussions may last years. Those affected need support, writes Marret K. Noordewier.



Author information

Authors and Affiliations

Faculty of Social and Behavioural Sciences; Social, Economic and Organisational Psychology, Leiden University, Leiden, The Netherlands

Marret K. Noordewier


Corresponding author

Correspondence to Marret K. Noordewier .

Ethics declarations

Competing interests

The author declares no competing interests.


Cite this article

Noordewier, M.K. Support for those affected by scientific misconduct is crucial. Nat Hum Behav 7, 830 (2023). https://doi.org/10.1038/s41562-023-01607-8


Published: 08 May 2023

Issue date: June 2023

DOI: https://doi.org/10.1038/s41562-023-01607-8



  • Open access
  • Published: 11 April 2023

Investigating and preventing scientific misconduct using Benford’s Law

  • Gregory M. Eckhartt 1 &
  • Graeme D. Ruxton 1  

Research Integrity and Peer Review volume 8, Article number: 1 (2023)


Integrity, and trust in that integrity, are fundamental to academic research. However, procedures for monitoring the trustworthiness of research, and for investigating cases where concern about possible data fraud has been raised, are not well established. Here we suggest a practical approach, using Benford’s Law, for the investigation of work suspected of fraudulent data manipulation. This should be of value to individual peer reviewers, academic institutions and journals alike. In this, we draw inspiration from the well-established practices of financial auditing. We provide a synthesis of the literature on tests of adherence to Benford’s Law, culminating in the recommendation of a single initial test for digits in each position of the numerical strings within a dataset. We also recommend further tests that may prove useful in the event that specific hypotheses regarding the nature of the data manipulation can be justified. Importantly, our advice differs from the most common current implementations of tests of Benford’s Law. Furthermore, we apply the approach to previously published data, highlighting the efficacy of these tests in detecting known irregularities. Finally, we discuss the results of these tests, with reference to their strengths and limitations.


Accounts of scientific misconduct can draw widespread attention. Archetypal cases include the study by Wakefield et al. [ 1 ] linking autism to the vaccine against measles, mumps and rubella, and the decade-long misconduct perpetrated by Diederik Stapel [ 2 , 3 ]. The problem, however, is far more widespread than often recognised. A meta-analysis of survey data reports that almost 2% of scientists admitted to having fabricated, falsified or modified data on at least one occasion [ 4 ]. This is perhaps unsurprising in the context of well-established biases towards the publication of significant results [ 5 , 6 , 7 , 8 ]; one study suggests that clinical trial results with statistically significant or positive findings are nearly three times more likely to be published than those with non-significant, negative or perceived-unimportant results [ 9 ]. One needs only to look through a list of recent retractions to understand the extent of the issue [ 10 ]. The potential consequences of such misconduct are dire, not only in their potential to directly affect human lives, as in the case of unvaccinated children [ 11 ], but also in their capacity for reputational damage to scientists, institutions, fields of research and the scientific process itself, at a time when societal confidence in the published scientific literature has been shaken and public figures have described scientific data on phenomena such as climate change as “fake news” [ 12 ].

The verification of data veracity is a key area of failure in this regard, and consensus regarding efficient methods is currently lacking. Even in areas of science such as medicine, where the quality of data can be directly linked to human outcomes and monetary gain or loss, guidelines on the audit and verification of source data are inconsistent and non-specific [ 13 ]. In many areas of science, peer review remains the most heavily relied-upon means of quality control by journals, whilst academic institutions seem to focus not on prevention or detection, but on investigation only after a whistle has been blown [ 3 ]. Although peer reviewers have undoubtedly become more familiar with the susceptibility of research to misconduct, little framework has existed to assist them in its investigation. Recently, a checklist was proposed that might be used to flag studies that are more vulnerable to manipulation for further investigation [ 14 ]. However, the same work identified that, after screening, there is no clear process to which reviewers might be directed to further investigate research data they suspect may be fraudulent [ 14 ]. We propose that Benford’s Law might provide a useful next step in the investigative process [ 15 ]. Analysing the digit-frequency distribution of financial data with reference to Benford’s Law is a well-established fraud-analysis technique in professional auditing [ 16 ], and its effectiveness in detecting fabricated data has been shown, for example, in the fields of anaesthesia, sociology and accounting research ([ 17 , 18 , 19 , 20 , 21 ]; see also [ 22 ], where it was not effective for a group of social psychology studies, although we explain later why detection can depend on careful choice of test statistic).

In the present paper we aim to provide a concise set of advice on the implementation of tests of compliance with Benford’s Law, as a primer for those wishing to further investigate data highlighted as problematic. This should be of value both to routine monitoring as part of the peer-review process and to investigations targeted at specific work where concern has been raised. It builds on seminal works such as that of Diekmann [ 21 ] by synthesising the available literature and drawing conclusions based on the weight of evidence presented. We discuss the qualities that make a sample of data more or less likely to conform to Benford’s Law, and offer guidance on ways to test for adherence to Benford’s Law statistically. We then take an example from animal personality data to explore the tests’ effectiveness with real data in a field to which they have not previously been applied, and explore how statistical testing can be augmented by the use of comparator datasets that are not under suspicion. Ultimately, we thus aim to contribute to the conception of an overall framework to which investigators might refer in the inspection of potentially fraudulent research.

Identifying abnormal patterns in data

Benford’s Law is a well-established observation that, in many numerical datasets, the distribution of first and higher-order digits of numerical strings has a characteristic pattern. The observation is named after the physicist Frank Benford [ 15 ], who reported it in a paper regarding “The Law of Anomalous Numbers”, although it was actually first stated by Simon Newcomb [ 23 ] and is sometimes referred to as the Newcomb–Benford Law. In its simplest form, it states that the first digit, d , of numerical strings in datasets that follow this distribution is more likely to be 1 than any other value, with the probability of occurrence, P(d) , decreasing as the digit increases in value (see Eq. 1 below and Fig.  1 ). This phenomenon can be observed across a wide array of datasets, including natural data such as global infectious disease cases and earthquake depths [ 24 ], financial data [ 25 ], genome data [ 26 ], and mathematical and physical constants [ 15 ].

Fig. 1: Benford’s Law for the first digit. Graphical depiction of Benford’s Law as applied to the first digits of a notional dataset that perfectly fits the law, displaying the characteristic negative logarithmic curve of occurrence probability, P(d) , as the digit value increases.

\(P(d)={\log}_{10}\left(1+\frac{1}{d}\right)\)  (1), where i  = 1 and 1 ≤  d  ≤ 9
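The first-digit probabilities of Eq. 1 are easy to compute directly. As an illustrative sketch (the article’s own reusable code, provided in the additional file, is written in R; this Python equivalent is ours):

```python
import math

def benford_first_digit(d: int) -> float:
    """P(d) under Benford's Law for the first digit (Eq. 1), 1 <= d <= 9."""
    if not 1 <= d <= 9:
        raise ValueError("a leading digit must be between 1 and 9")
    return math.log10(1 + 1 / d)

# The probabilities fall monotonically from P(1) to P(9),
# tracing the negative logarithmic curve shown in Fig. 1
probs = [benford_first_digit(d) for d in range(1, 10)]
```

Summing the nine probabilities recovers 1, as the telescoping product log10(2/1 · 3/2 · … · 10/9) = log10(10) makes clear.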

Furthermore, the law can be generalised to digits beyond the first, such that we can predict the probability of occurrence, P(d) , of any digit, d , in any position, i , within a given string using the conditional probabilities of the preceding digits ([ 27 ]; see Table  1 and Eq. 1 (for i  = 1) & 2 (for i  > 1)). This can be especially important in assessing adherence to a Benford’s Law distribution, as data fabricators will often neglect to conform digits subsequent to the first to any kind of natural distribution [ 21 ].

\(P(d)={\sum}_{k={10}^{i-2}}^{{10}^{i-1}-1}{\log}_{10}\left(1+\frac{1}{10k+d}\right)\)  (2), where i  > 1 and 0 ≤  d  ≤ 9
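Eq. 2 can be implemented by summing over every possible run of preceding digits. A minimal Python sketch of the generalised law (our illustration, not the paper’s supplementary code):

```python
import math

def benford_digit(d: int, i: int) -> float:
    """P(d) for digit position i (1-indexed): Eq. 1 for i = 1, Eq. 2 for i > 1."""
    if i == 1:
        return math.log10(1 + 1 / d)  # leading digit: 1 <= d <= 9
    # Sum the probabilities over all possible strings of preceding digits
    return sum(math.log10(1 + 1 / (10 * k + d))
               for k in range(10 ** (i - 2), 10 ** (i - 1)))

second_digit = [benford_digit(d, 2) for d in range(10)]
```

Note how the distribution flattens with position: the second-digit probabilities vary only between roughly 0.12 (for 0) and 0.085 (for 9), which is one reason fabricators’ neglect of later digits can be diagnostic.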

Deviations from Benford’s Law then, in datasets where we expect to see adherence to this digit distribution, can raise suspicion regarding data quality. Indeed, financial auditors have been using Benford’s Law for some years to test datasets’ adherence to the expected distribution in order to detect possible fraudulent manipulation [ 16 ]. It has also been applied recently in the analysis of COVID-19 data and the potential spuriousness of some countries’ self-reported disease cases [ 28 , 29 ]. Accordingly, it has been suggested that Benford’s Law provides a suitable framework against which scientific research data can be inspected for possible indications of manipulation [ 21 , 30 ].

In order to do so, we must first define the datasets that are appropriate for this use and for which we would expect to see adherence to Benford’s Law (BL). In general, datasets in which individual values span multiple orders of magnitude are more likely to abide by BL. There is no set minimum number of datapoints, although a good rule of thumb can be derived from a power analysis by Hassler and Hosseinkouchack [ 31 ]: the statistical tests for deviations from Benford’s Law will generally be most effective with N  ≥ 200. However, even for sample sizes as small as 20, some testing may be worthwhile (see [ 32 ] for approaches in this case).

This assumption being satisfied, we should more specifically expect data with a positively skewed distribution, as is common in naturally occurring data (such as river lengths or fishery catch counts), to adhere to BL. This includes such distributions as the exponential, log-logistic and gamma distributions [ 33 ]. Furthermore, we can expect figures derived from combinations or functions of numbers, such as financial debtors’ balances, where price is multiplied by quantity [ 34 ], or the regression coefficients of papers within a journal [ 21 ], to conform with Benford’s Law. Note that this should be true irrespective of the unit of measurement, i.e. the distribution of digits should be scale invariant [ 27 ].
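This expectation is easy to check by simulation. The sketch below (an illustration of ours, not taken from the article) draws a large log-normal sample, which is positively skewed and spans several orders of magnitude, and compares its leading-digit frequencies with Eq. 1:

```python
import math
import random

random.seed(42)
# Log-normal data: positively skewed, spanning several orders of magnitude
sample = [random.lognormvariate(0, 3) for _ in range(100_000)]

def first_digit(x: float) -> int:
    """Leading significant digit of a positive number."""
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

counts = [0] * 10
for x in sample:
    counts[first_digit(x)] += 1

observed = [counts[d] / len(sample) for d in range(1, 10)]
expected = [math.log10(1 + 1 / d) for d in range(1, 10)]
# observed tracks expected closely, with digit 1 the most frequent
```

By contrast, repeating the exercise with, say, uniformly distributed values on a fixed interval produces first-digit frequencies far from the Benford curve.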

There are also some cases where we might expect the digits following the first, but not the first digit itself, to follow Benford’s Law. An example is a stock market index such as the FTSE 100 over time, for which the magnitudes of the first digits are constrained (the index having never exceeded 8000 at the time of writing) but for which the subsequent digits do follow the expected Benford’s Law distribution reasonably closely.

Equally, there are many datasets for which a Benford’s Law digit distribution may not be appropriate. This is true of data that is normally or uniformly distributed. The Benford’s Law digit distribution should also be expected not to be met by data that is human-derived to the extent that no natural variation would be expected, such as prices of consumer goods, or artificially selected dependent variables such as the volume of a drug assigned to different treatment groups [ 33 , 34 ]. Ultimately, the reviewer must apply professional judgement and scepticism in choosing appropriate datasets for analysis by reference to a Benford distribution. Implicit in this is the requirement that investigators determine and justify whether data should be expected to conform to Benford’s Law prior to any testing of that conformity. Table  2 provides a non-exhaustive summary of properties of appropriate and inappropriate data for Benford analysis.

Once an appropriate dataset has been selected, we may assess conformance to Benford’s Law in a number of ways. There are several options to choose from in testing adherence to Benford’s Law statistically. Goodness-of-fit tests, including for example the Cramér–von Mises, Kolmogorov–Smirnov or Pearson’s 𝜒 2 -tests, might seem most appropriate, and indeed seem to be the most often used tests in the Benford’s Law literature [ 31 ]. Determining the best test is not as simple as it may appear, however: it requires consideration of sensitivity to different types of deviation from the law, avoidance of mistakenly suggesting deviation where none exists, interpretability and parsimony.

Hassler and Hosseinkouchack [ 31 ] conducted a power analysis by Monte Carlo simulation of several statistical tests of adherence to Benford’s Law at various sample sizes up to N = 1000, including Kuiper’s variant of the Kolmogorov–Smirnov test, the Cramér–von Mises test, Pearson’s 𝜒 2 -test with 8 degrees of freedom (9 for i  > 1) (Eq. 3 below), and a variance-ratio test developed by the authors [ 35 ]. They found all of these tests to be underpowered at detecting the types of departure investigated, in comparison with the simple 𝜒 2 -test with one degree of freedom suggested by [ 36 ] (Eq. 4 ), which compares the mean of the observed digits with the expected mean. They further recommend that, for Benford’s Law for the first digit, greater power can be achieved by a one-sided mean test ‘Ζ’ (Eq. 5 ), if one can justify the a priori assumption that the alternative hypothesis is unidirectional. This may be assumed if we believe a naïve data fabricator might tend to fabricate data with first-digit probabilities closer to a uniform distribution, biasing the probability towards higher digits in the first position and thus increasing the mean, \(\overline{d}\) , of the observed first digits in comparison to the expected mean, E(d) (see a summary of E ( d ) in Table 1 ); although see Diekmann [ 21 ], who suggests that fabricators may intuitively form a reasonable distribution of first but not second digits. Accordingly, the null hypothesis in Ζ is rejected where \(\overline{d}>E(d).\)

What we refer to as the 𝜒 2 -test with 8 or 9 degrees of freedom, the 𝜒 2 -test with one degree of freedom and the Ζ test, respectively, have calculated values as defined below:

\({\upchi}_{8\ \mathrm{or}\ 9}^2=N\sum_d\frac{{\left({h}_d-{p}_d\right)}^2}{p_d}\)  (3)

\({\upchi}_1^2=\frac{N{\left(\overline{d}-E(d)\right)}^2}{\sigma_d^2}\)  (4)

\(\mathrm{Z}=\frac{\sqrt{N}\left(\overline{d}-E(d)\right)}{\sigma_d}\)  (5)

where:

N is the number of observed digits

d is an index for each possible digit

h d is the observed frequency of digit d (such that these frequencies sum to 1)

p d is the expected frequency of digit d (see Table 1 )

\(\overline{d}\) is the mean of the N observed digits ( \(\overline{d}={N}^{-1}\sum_{j=1}^N{d}_j\) ) and d j is the observed digit value at the relevant position corresponding to datapoint j of the dataset of N observed digits, where 1 ≤  j  ≤  N .

E(d) is the expected digit mean (see Table 1 )

σ d is the standard deviation of expected digits (see Table 1 )
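Under the definitions above, all three statistics can be computed in a few lines. The sketch below is our own Python illustration, consistent with Eqs. 3 to 5 as stated from these definitions (the article’s own implementation is the R snippet in the additional file):

```python
import math

def benford_expected(i: int = 1) -> dict:
    """Expected frequencies p_d at digit position i (Eq. 1 for i = 1, Eq. 2 for i > 1)."""
    if i == 1:
        return {d: math.log10(1 + 1 / d) for d in range(1, 10)}
    return {d: sum(math.log10(1 + 1 / (10 * k + d))
                   for k in range(10 ** (i - 2), 10 ** (i - 1)))
            for d in range(10)}

def benford_tests(digits: list, i: int = 1):
    """Return (chi2 with 8 or 9 df, chi2 with 1 df, Z) for observed digits at position i."""
    p = benford_expected(i)
    n = len(digits)
    h = {d: digits.count(d) / n for d in p}                               # observed frequencies h_d
    e_d = sum(d * pd for d, pd in p.items())                              # expected digit mean E(d)
    sigma = math.sqrt(sum(d * d * pd for d, pd in p.items()) - e_d ** 2)  # sigma_d
    d_bar = sum(digits) / n                                               # observed digit mean
    chi2_full = n * sum((h[d] - p[d]) ** 2 / p[d] for d in p)             # Eq. 3
    z = math.sqrt(n) * (d_bar - e_d) / sigma                              # Eq. 5 (one-sided)
    chi2_one = z ** 2                                                     # Eq. 4 equals Z squared
    return chi2_full, chi2_one, z
```

A sample whose first digits are close to the Benford frequencies yields values of all three statistics near zero, whereas uniformly fabricated first digits push \(\overline{d}\) well above E(d) and inflate Ζ.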

Further simulations, using greater sample sizes, can be seen in Wong [ 37 ], suggesting, in the absence of the variance-ratio and one-degree-of-freedom 𝜒 2 -tests examined by Hassler and Hosseinkouchack [ 31 ], that the Cramér–von Mises or Anderson–Darling tests can provide the greatest power to detect some types of deviation. More importantly, however, Wong [ 37 ] suggests that with increasing sample sizes (N > ~ 3000), the rejection rate of the null hypothesis under any such test increases significantly, even for distributions that deviate only very slightly from the null distribution.

With consideration of statistical power, complexity, interpretability and parsimony, we therefore recommend Pearson’s 𝜒 2 -test with one degree of freedom (Eq. 4 ) as an effective overall test statistic for the adherence of an appropriate dataset to Benford’s Law. Furthermore, when testing for adherence to Benford’s Law for the first digit only, we echo the sentiments of Hassler and Hosseinkouchack [ 31 ] that it may be appropriate to increase the power of the test by assuming a unidirectional alternative hypothesis and applying a one-tailed variant of the test. Of course, investigators may often want to use multiple tests. Indeed, there is reason in some cases to argue that the tests of digit means in Eqs. 4 & 5 are less informative than the chi-squared test in Eq. 3 . These tests are useful as a first port of call when testing general hypotheses regarding the distribution of fabricated digits; however, they are occasionally less sensitive than Eq. 3 to substantial variations in individual digits. For example, if we believe that a fabricator might produce more fives and zeros in the second position of numerical strings than would be expected naturally, Eqs. 4 & 5 may not detect this if the mean value of digits in this position is compensated for by the distribution of the other digits. In such a situation it is of value to adopt a further statistic, and the chi-squared test in Eq. 3 is generally a useful option.

It is important to note that statistically significant deviations from Benford’s Law need not be caused by fraudulent manipulation, as typified by the observation of Wong [ 37 ] that ever greater sample sizes will increase the likelihood of very small deviations from the null distribution being detected. Testing multiple digit positions within the same dataset will also increase the chance of type I error. This should be acknowledged, controlled for using a procedure such as Bonferroni correction, or addressed with a compound test across multiple digits (see [ 32 ] for useful approaches in this regard). Data irregularities may also arise as a result of error rather than manipulation. Even with the most parsimonious test, caution and forethought must be applied in the use of such tests with certain datasets. We recommend plotting the expected and observed distributions of digits as an intuitive means of estimating the strength of any deviation from the expected distribution. A reusable code snippet has been provided in the additional file (part 1. Reusable Benford’s Law tests and graphs), which may be used to extract digits from numerical strings in a dataset, plot the associated distributions, and apply the tests under Eqs. 3 to 5 . Investigators may also prefer to use the benford.analysis package for plotting [ 38 ].
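Extracting the digits themselves is the only mechanical step. A hypothetical Python helper, ours rather than the article’s R snippet, might look as follows (floating-point rounding can bite exactly at magnitude boundaries, so treat this as a sketch):

```python
import math

def significant_digit(x: float, i: int = 1):
    """Return the i-th significant digit of a number x (1-indexed), or None for zero."""
    x = abs(x)
    if x == 0:
        return None  # zero has no leading significant digit
    # Scale x so its i-th significant digit sits in the units place
    exponent = math.floor(math.log10(x))
    scaled = x / 10 ** (exponent - i + 1)
    return int(scaled) % 10

# e.g. a dataset column [345.6, 0.0123, 78.0] yields first digits [3, 1, 7]
```

Mapping this function over a column of measurements, once for i = 1 and once for i = 2, gives the observed digit frequencies that the tests and plots above consume.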

Whilst it is provable mathematically that a scale-neutral, random sample of numbers selected from a set of random probability distributions will follow Benford’s Law [ 27 ], Benford’s Law is not immutable or irrefutable for real data. We can observe that Benford’s Law holds remarkably well for certain datasets, reflecting Hill’s theoretical proof and the idea that such data are ultimately the product of random processes and random sampling; in reality, however, no such dataset is truly random in its construction or sampling. As such, we can expect minor deviations from Benford’s Law even in datasets that fit all of the supposed criteria for suitable data. It is therefore not possible to prove incontrovertibly that some set of data should, or should not, follow an exact distribution such as Benford’s Law. Justification for expecting a given dataset to conform to Benford’s Law can come from discussion of the criteria already mentioned, but also from demonstrated conformity to Benford’s Law of similar, independently obtained datasets. We therefore suggest that investigations of a suspect dataset through exploration of adherence to Benford’s Law will be greatly strengthened if appropriate “control” datasets are subjected to the same testing. We put this to the test in the following section, " Application to real data ". Ideally, the person carrying out such testing should be blind to which datasets are controls and which are the focal suspect ones.

Application to real data

To demonstrate the efficacy of the described approach, we applied the tests of conformity to Benford’s Law to a number of existing, publicly available datasets. First, we applied the tests to datasets from publications that were retracted for suspected irregularities in the data. We then compared these with similar datasets with no such retractions or public suspicions of data abnormalities, to assess whether and when the tests do or do not detect known irregularities.

First, we sought to identify published articles with data likely to contain irregularities. We used the Retraction Watch Database [ 10 ] to search for retracted research articles tagged with expressions of concern about the underlying data. We limited this search to articles published by ‘Royal Society Publishing’, which has implemented a strong open-data policy since 2013 [ 39 , 40 ]; it is otherwise exceedingly difficult to find publicly available data from publications that have been retracted for data issues. The exact search criteria used can be found in the additional file (part 2. Searches and methods). We manually scanned each of the 23 items identified by this search (some of which were duplicates of the same article with different levels of notice), identifying two articles that met all of the criteria for testing, including publicly available and practically useable data, suitability for Benford’s Law analysis, and retraction for issues in the underlying data (henceforth, articles 1 & 2; see Table  3 ). The conclusions of both studies generally rely on data concerning individual differences within consistent aspects of animal behaviour, or ‘personality’ as it is often referred to [ 46 ]. This is a natural phenomenon which is well researched within behavioural ecology and generally understood to be the result of natural processes of genetic expression and environment. Data resulting from many methods of personality measurement, such as the time for a fish to emerge from a refuge after being placed in a novel site (e.g. [ 45 , 47 ]), are found to have distributions across populations that mimic those of other natural processes, being positively skewed [ 48 ] and thus conforming to the criteria outlined in " Identifying abnormal patterns in data " (see Table 2 ).

Fig. 2: Benford’s Law tests for articles 1 & 2. Distribution of digit-value frequencies for the first (left panels) and second (right panels) digit positions of data from datasets of animal personality measures, taken from research articles retracted for suspicions of data fabrication, with 95% Sison & Glaz confidence intervals represented by the dashed lines. Dots represent the Benford expected frequency of digits, whilst the solid line represents the observed frequency. Top two panels: article 1. Bottom two panels: article 2.

Fig. 3: Benford’s Law tests for articles 3 to 5. Distribution of digit-value frequencies for the first (left panels) and second (right panels) digit positions of data from datasets of animal personality measures, taken from research articles not retracted for suspicions of data fabrication, with 95% Sison & Glaz confidence intervals represented by the dashed lines. Dots represent the Benford expected frequency of digits, whilst the solid line represents the observed frequency. Top two panels: article 3. Middle two panels: article 4. Bottom two panels: article 5.

Next, we sought to identify research articles with data on the same type of phenomena, as similar as possible to articles 1 & 2 but differentiated by having no notices of retraction or public suspicions of irregularities within the data. To achieve this, we identified the general topic of the two retracted articles and created a search directly within the Royal Society Publishing website’s journal search tool, centred on research articles with titles containing the words ‘personality’, ‘boldness’ or ‘bold’, published from 2014 onwards (as in the previous search). The search criteria and results can be found in the additional file (part 2. Searches and methods). The first 30 publication results were manually scanned for appropriate data. Eight publications were deemed appropriate based on methodological similarities with articles 1 & 2, conformance with the criteria in " Identifying abnormal patterns in data ", and the availability and useability of the underlying data.

For methodological convenience, such studies of personality often constrain the maximum values of time to emerge/resume activity, assigning a maximum value where the animal is found not to have emerged/resumed activity after a specified length of time. In this way, the associated data contain several experimenter-assigned numbers, which artificially skew the data and inflate the number of zeros in the digits subsequent to the first of numerical strings within the data. In general, Benford’s Law is not expected to apply (at least to the first digit) when data contain an imposed maximum and/or minimum value. As such, we only analyse a subset of the data for these sets, being all data with values less than the artificial maximum assigned by the authors. Accordingly, we sought to test the datasets with the most useable data for Benford analysis: data needed to be as numerous as possible, to maximise the power of the analysis, whilst spanning as many orders of magnitude as possible (i.e. those datasets with the greatest artificially assigned maximum). In the absence of a strong argument to favour either criterion, we ranked each of the eight studies according to the two criteria with equal weighting. In this way, we could empirically determine the three studies with the highest combined rank for testing (articles 3 to 5; see Table 3 and the additional file , part 2. Searches and methods). For consistency, only time data on personality were analysed across all five datasets.

For each of the datasets identified in accordance with the criteria above, we used R version 4.0.4 to extract the digits from the numerical strings of each datapoint and so ascertain the distribution frequency of digits in the first and second positions. Using those distribution frequencies, we were able to visualise conformity with Benford’s Law and estimate the goodness of fit using the chi-squared and Ζ tests in accordance with Eqs. 3 – 5 , outlined in " Identifying abnormal patterns in data ". Simultaneous confidence intervals were estimated and graphed for each set of digits using the method of Sison and Glaz [ 49 ], which can account for multinomial proportions, employing the R package MultinomialCI [ 50 ]. The code employed in analysing these datasets is available in the OSF repository [ 51 ] and could easily be modified by readers interested in conducting similar analyses. In this regard there is also a useful R package, benford.analysis [ 38 ].

Under \({\upchi}_1^2\) , articles 1 & 2 deviated significantly from Benford’s Law for digits in the first and second positions, whilst articles 3 to 5 did not deviate significantly (summarised in Table 3 ). Under Ζ, none of the articles deviated significantly for first-position digits; for articles 1 & 2 this is because \(\overline{d}<E(d)\) in both instances, meaning the one-sided null hypothesis could not be rejected. Finally, under \({\upchi}_{8\ or\ 9}^2\) , articles 1, 2 & 3 deviated significantly from Benford’s Law for digits in the first position. However, for digits in the second position, only articles 1 & 2 deviated significantly from Benford’s Law.

As can be seen in Table 3 , for both 1st and 2nd digits, \({\upchi}_1^2\) raised concerns about the data in the two articles that had already been identified as problematic, but never for the three comparator datasets. Conversely, Ζ raised no concerns about any of the articles, and \({\upchi}_8^2\) raised concerns about the two “problematic” articles, but also suggested a possible “false positive” concern about article 3.

Generally, the present results build on the growing evidence base indicating that Benford’s Law is an effective means of screening data for potential fabrication (e.g. [ 21 , 30 , 52 ]). Furthermore, the results of this study highlight the importance of understanding the data that one is investigating, as well as the limitations and advantages of the different tests of adherence to Benford's Law. For example, although the chi-squared test with one degree of freedom (Eq. 4 ) performed well using the distribution of first digits to flag data known to contain issues, the variant of the chi-squared test with 8 or 9 degrees of freedom (Eq. 3 ) did not, while the one-sided Z-test (Eq. 5 ) proved insensitive. We therefore reiterate our earlier statement that Eq. 4 is a useful tool for initial screening, whilst Eqs. 3 & 5 , together with exploratory visual analysis of graphs, can be useful in testing specific hypotheses regarding the nature of potential data fabrication. Indeed, visual analysis of the graph of observed first digits from article 3 reveals little concern despite the possible “false positive” indicated by Eq. 3 .

Given that the data do not span more than 3 to 4 orders of magnitude, one might argue that tests for digits in the first position inflate the likelihood of error compared with tests for digits in the second position. In the case of the Z-test and the chi-squared test with one degree of freedom, this means that it is difficult to justify the assumption that the expected digit mean should resemble that of Benford's Law. In this case, it is of comfort that we are able to apply the model to digits beyond the first, where the distributions of digits are less affected by orders of magnitude. Indeed, the “false positive” identified here builds on the “false negative” findings of Diekmann [ 21 ], illustrating that Benford’s Law tests are often more effective at flagging data issues using the distribution of second and higher digits [ 21 ].

Consistent with the published notices of retraction of articles 1 & 2 [ 53 , 54 ], the tests employed in the present study flagged issues in the data which, upon closer inspection, contained inexplicable duplications. In the case of both articles, retractions were issued just under 6 years after the publication of the original articles. We argue that the journals might have detected these issues much more quickly using the tests employed in the present study, and in so doing have protected their reputations and the integrity of the scientific literature more generally. That said, the publisher is to be commended for requiring the public availability of source data in the first place, without which such scrutiny and re-examination would not be possible. We argue that scientific integrity would be improved immeasurably by the standardisation of such requirements upon publication. Furthermore, we argue that the use of statistical tests such as those outlined here provides a useful foundation on which to build a framework for the prevention and detection of scientific misconduct through the manipulation of data, which might be used by individual peer reviewers, academic journals and scientific institutions alike. However, given the risk of false positives (and negatives) inherent in any statistical test, statistical testing can only be a part (albeit a valuable one) of investigating potential fraud.

It is important to note that fraudulent data manipulation may manifest in ways that are less detectable by analyses of adherence to BL, or may occur in data that are unsuitable for such analyses because they would not be expected to adhere to BL. It is reassuring, therefore, that the statistical toolbox available to investigators is vast, given the appropriate expertise. For example, an investigator might test the hypothesis that a researcher has fabricated clinical trial data for two supposedly randomised trial groups by assessing the under- or over-dispersion of the summary statistics. Indeed, Barnett [ 55 ] provides a comprehensive analysis of such a test’s effectiveness, concluding that it can usefully flag suspect clinical trials in targeted checks; it might reasonably also be applied to the summary statistics of other between-groups experimental data. Considering a broad range of statistical tests will be important in building a framework for the detection and prevention of scientific misconduct. Recent work on international trade data has demonstrated how to identify features of data for which Benford’s Law should hold in the absence of fraudulent manipulation, how application of the law can be modified where conformity cannot be expected, and how evidence of fraudulent activity can be gathered in that context [ 56 ]. Exploring the applicability of these findings to other areas of potential data manipulation would valuably expand the detection toolkit. Similarly, recent work [ 32 ] suggests that testing procedures combining existing tests can be very effective at detecting departures from Benford’s Law even in datasets with as few as 20 datapoints. Further exploration of these approaches, for instance by assessing their performance on datasets already considered a matter of concern (as done here), would be very valuable.
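To illustrate the dispersion idea in its simplest form (a deliberately simplified sketch of our own, not Barnett's full procedure [ 55 ]): under proper randomisation, standardised baseline differences between two trial arms should behave roughly like independent standard normal draws, so their sum of squares should be close to a chi-squared variate with as many degrees of freedom as there are baseline variables. A value far below the degrees of freedom suggests groups that are suspiciously similar (under-dispersion); a value far above suggests over-dispersion. All variable names and numbers below are hypothetical.

```python
def baseline_dispersion_stat(mean_diffs, std_errors):
    """Sum of squared standardised baseline differences.

    Under proper randomisation (and assuming roughly independent
    baseline variables), each z_i = diff_i / se_i is approximately
    standard normal, so the statistic is ~ chi-squared with k df.
    """
    z_sq = [(d / s) ** 2 for d, s in zip(mean_diffs, std_errors)]
    return sum(z_sq), len(z_sq)

# Hypothetical trial: baseline means in the two arms are almost
# identical across 5 variables relative to their standard errors.
stat, df = baseline_dispersion_stat(
    [0.01, -0.02, 0.00, 0.01, -0.01],  # group mean differences
    [0.50, 0.40, 0.60, 0.55, 0.45],    # standard errors of those differences
)
# A statistic far below df would flag suspicious under-dispersion.
```

The independence assumption rarely holds exactly for correlated baseline variables, which is one reason Barnett's treatment is considerably more careful than this sketch.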

Conclusions

It is of paramount importance that confidence in science is maintained. By providing a unified approach with which reviewers can investigate suspect data, and by empirically validating its efficacy, we hope to have shown how the assurance gained over scientific data can be improved. The controls over source data in the scientific literature remain clearly insufficient. We hope, however, that in describing a practical approach we will encourage academic institutions and publishers to reform and improve the controls employed in preventing and detecting scientific misconduct. Heightened rigour in the scrutiny of scientific research is inevitable; ultimately, the leaders and early adopters in this field will be rewarded by mitigating their risk of association with fraudsters and by contributing to the ethical maintenance of truth in science.

Availability of data and materials

The datasets analysed during the current study are publicly available in the repositories associated with [ 41 , 42 , 43 , 44 , 45 ]. The code employed in analysing these datasets is available in the OSF repository [ 51 ]. The results and criteria of the searches described in the "Methods" section are available in the additional file, part 2. The data analysed in the additional file, part 1, are available in the World Bank Group repository, https://data.worldbank.org/ [ 57 ].

Wakefield AJ, Murch SH, Anthony A, Linnell J, Casson DM, Malik M, et al. RETRACTED: Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. Lancet. 1998;351(9103):637–41.

Levelt Committee, Noort Committee, Drenth Committee. Flawed science: the fraudulent research practices of social psychologist Diederik Stapel: University of Tilburg; 2012. [cited 10 Sep 2022]. Available from: https://www.tilburguniversity.edu/nl/over/gedrag-integriteit/commissie-levelt

Stroebe W, Postmes T, Spears R. Scientific misconduct and the myth of self-correction in science. Perspect Psychol Sci. 2012;7(6):670–88.

Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One. 2009;4(5):e5738.

Dickersin K. The existence of publication bias and risk factors for its occurrence. JAMA. 1990;263(10):1385–9.

Dickersin K. Publication bias: recognizing the problem, understanding its origins and scope, and preventing harm. Publication bias in meta-analysis: prevention, assessment and adjustments; 2005. p. 11–33.

Jennions MD, Moeller AP. Publication bias in ecology and evolution: an empirical assessment using the ‘trim and fill’ method. Biol Rev. 2002;77(2):211–22.

Fanelli D. Do pressures to publish increase scientists’ bias? An empirical support from US states data. PLoS One. 2010;5(4):e10271.

Hopewell S, Loudon K, Clarke MJ, Oxman AD, Dickersin K. Publication bias in clinical trials due to statistical significance or direction of trial results. Cochrane Database Syst Rev. 2009;1.

The Retraction Watch Database. New York: The Center for Scientific Integrity. 2022 [cited 25/05/2022]. Available from: http://retractiondatabase.org/ .

DeStefano F, Shimabukuro TT. The MMR vaccine and autism. Annu Rev Virol. 2019;6:585–600.

Allen DE, McAleer M. Fake news and indifference to scientific fact: president Trump’s confused tweets on global warming, climate change and weather. Scientometrics. 2018;117(1):625–9.

Houston L, Probst Y, Martin A. Assessing data quality and the variability of source data verification auditing methods in clinical research settings. J Biomed Inform. 2018;83:25–32.

Parker L, Boughton S, Lawrence R, Bero L. Experts identified warning signs of fraudulent research: a qualitative study to inform a screening tool. J Clin Epidemiol. 2022;151:1–17.

Benford F. The law of anomalous numbers. Proc Am Philos Soc. 1938;78(4):551–72.

Nigrini MJ. Benford’s Law: applications for forensic accounting, auditing, and fraud detection. Hoboken: Wiley; 2012.

Hüllemann S, Schüpfer G, Mauch J. Application of Benford’s law: a valuable tool for detecting scientific papers with fabricated data? Anaesthesist. 2017;66(10):795–802.

Hein J, Zobrist R, Konrad C, Schuepfer G. Scientific fraud in 20 falsified anesthesia papers. Anaesthesist. 2012;61(6):543–9.

Schüpfer G, Hein J, Casutt M, Steiner L, Konrad C. From financial to scientific fraud: methods to detect discrepancies in the medical literature. Anaesthesist. 2012;61(6):537–42.

Horton J, Kumar DK, Wood A. Detecting academic fraud using Benford law: the case of professor James Hunton. Res Policy. 2020;49(8):104084.

Diekmann A. Not the first digit! Using Benford's law to detect fraudulent scientific data. J Appl Stat. 2007;34(3):321–9.

Auspurg K, Hinz T. Social dilemmas in science: detecting misconduct and finding institutional solutions. In: Social dilemmas, institutions, and the evolution of cooperation; 2017. p. 189–214.

Newcomb S. Note on the frequency of use of the different digits in natural numbers. Am J Math. 1881;4(1):39–40.

Sambridge M, Tkalčić H, Jackson A. Benford’s law in the natural sciences. Geophys Res Lett. 2010;37(22).

Geyer CL, Williamson PP. Detecting fraud in data sets using Benford's law. Commun Stat-Simul Comput. 2004;33(1):229–46.

Friar JL, Goldman T, Pérez-Mercader J. Genome sizes and the Benford distribution. PLoS One. 2012;7(5):e36624.

Hill TP. Base-invariance implies Benford’s law. Proc Am Math Soc. 1995;123(3):887–95.

Lee K-B, Han S, Jeong Y. COVID-19, flattening the curve, and Benford’s law. Physica A: Stat Mech Appl. 2020;559:125090.

Kennedy AP, Yam SCP. On the authenticity of COVID-19 case figures. PLoS One. 2020;15(12):e0243123.

Gauvrit NG, Houillon J-C, Delahaye J-P. Generalized Benford’s law as a lie detector. Adv Cogn Psychol. 2017;13(2):121.

Hassler U, Hosseinkouchack M. Testing the newcomb-Benford law: experimental evidence. Appl Econ Lett. 2019;26(21):1762–9.

Cerasa A. Testing for Benford’s law in very small samples: simulation study and a new test proposal. PLoS One. 2022;17(7):e0271969.

Formann AK. The Newcomb-Benford law in its relation to some common distributions. PLoS One. 2010;5(5):e10541.

Durtschi C, Hillison W, Pacini C. The effective use of Benford’s law to assist in detecting fraud in accounting data. J Forens Account. 2004;5(1):17–34.

Hassler U, Hosseinkouchack M. Ratio tests under limiting normality. Econ Rev. 2019;38(7):793–813.

Tödter K-H. Benford’s law as an Indicator of fraud in economics. Ger Econ Rev. 2009;10(3):339–51.

Wong SCY. Testing Benford’s law with the first two significant digits [thesis on the internet]: University of Victoria (AU); 2010. [cited 10 Sep 2022]. Available from: http://dspace.library.uvic.ca/handle/1828/3031

Cinelli C. Package ‘benford.analysis’; 2018.

Royal Society Publishing. Data sharing and mining | Royal Society. 2016. [Last accessed: 08/07/2022]. Available from: https://royalsociety.org/journals/ethics-policies/data-sharing-mining/ .

FAIRsharing.org: The Royal Society - Data sharing and mining. https://doi.org/10.25504/FAIRsharing.dIDAzV [Last accessed: 08/07/2022].

Laskowski KL, Pruitt JN. Evidence of social niche construction: persistent and repeated social interactions generate stronger personalities in a social spider. Proc R Soc B Biol Sci. 2014;281(1783):20133166.

Modlmeier AP, Laskowski KL, DeMarco AE, Coleman A, Zhao K, Brittingham HA, et al. Persistent social interactions beget more pronounced personalities in a desert-dwelling social spider. Biol Lett. 2014;10(8):20140419.

Guenther A. Life-history trade-offs: are they linked to personality in a precocial mammal (Cavia aperea)? Biol Lett. 2018;14(4):20180086.

Hulthén K, Chapman BB, Nilsson PA, Hollander J, Brönmark C. Express yourself: bold individuals induce enhanced morphological defences. Proc R Soc B Biol Sci. 2014;281(1776):20132703.

Klemme I, Karvonen A. Learned parasite avoidance is driven by host personality and resistance to infection in a fish–trematode interaction. Proc R Soc B Biol Sci. 2016;283(1838):20161148.

Carter AJ, Feeney WE, Marshall HH, Cowlishaw G, Heinsohn R. Animal personality: what are behavioural ecologists measuring? Biol Rev. 2013;88(2):465–75.

Gasparini C, Speechley EM, Polverino G. The bold and the sperm: positive association between boldness and sperm number in the guppy. R Soc Open Sci. 2019;6(7):190474.

Limpert E, Stahel WA, Abbt M. Log-normal distributions across the sciences: keys and clues: on the charms of statistics, and how mechanical models resembling gambling machines offer a link to a handy way to characterize log-normal distributions, which can provide deeper insight into variability and probability—normal or log-normal: that is the question. BioScience. 2001;51(5):341–52.

Sison CP, Glaz J. Simultaneous confidence intervals and sample size determination for multinomial proportions. J Am Stat Assoc. 1995;90(429):366–9.

Villacorta PJ. Package ‘MultinomialCI’; 2021.

Eckhartt GM. Data for: Investigating and preventing scientific misconduct using Benford’s Law. OSF; 2022. osf.io/2b6v8.

Hales DN, Chakravorty SS, Sridharan V. Testing Benford’s law for improving supply chain decision-making: a field experiment. Int J Prod Econ. 2009;122(2):606–18.

Laskowski KL, Modlmeier AP, DeMarco AE, Coleman A, Zhao K, Brittingham HA, et al. Retraction: Persistent social interactions beget more pronounced personalities in a desert-dwelling social spider. Biol Lett. 2020;16(2):20200062.

Laskowski KL, Pruitt JN. Retraction: Evidence of social niche construction: persistent and repeated social interactions generate stronger personalities in a social spider. Proc R Soc B Biol Sci. 2020;287(1919):20200077.

Barnett A. Automated detection of over- and under-dispersion in baseline tables in randomised controlled trials; 2022.

Cerioli A, Barabesi L, Cerasa A, Menegatti M, Perrotta D. Newcomb–Benford law and the detection of frauds in international trade. Proc Natl Acad Sci. 2019;116(1):106–15.

The World Bank Organisation. World Population Data. [Internet]. 2020 [cited 27 Apr. 2022]. Available from: https://data.worldbank.org/indicator/SP.POP.TOTL

Acknowledgements

We thank three referees for valuable comments and suggestions.

Author information

Authors and Affiliations

School of Biology, University of St Andrews, St Andrews, KY16 9TH, UK

Gregory M. Eckhartt & Graeme D. Ruxton

Contributions

GE made substantial contributions to the conception of the work, analysis of the data, and drafted the work. GR made substantial contributions to the conception of the work and substantively revised it. Both authors read and approved the final manuscript.

Authors’ information

GE is an accredited member of the Institute of Chartered Accountants in England and Wales (ICAEW) and a former financial auditor.

Corresponding author

Correspondence to Gregory M. Eckhartt .

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Eckhartt, G.M., Ruxton, G.D. Investigating and preventing scientific misconduct using Benford’s Law. Res Integr Peer Rev 8, 1 (2023). https://doi.org/10.1186/s41073-022-00126-w

Download citation

Received : 06 June 2022

Accepted : 13 December 2022

Published : 11 April 2023

DOI : https://doi.org/10.1186/s41073-022-00126-w

Keywords

  • Scientific misconduct
  • Peer review
  • Benford’s Law
  • Benford’s Law tests
  • Retracted article testing
  • Animal behaviour

Research Integrity and Peer Review

ISSN: 2058-8615

July 26, 2023

Controversial Physicist Faces Mounting Accusations of Scientific Misconduct

Allegations of data fabrication have sparked the retraction of multiple papers from Ranga Dias, a researcher who claimed discovery of a room-temperature superconductor

By Daniel Garisto & Nature magazine

Ranga Dias, a physicist at the University of Rochester in New York, is at the center of a controversy over room-temperature superconductivity claims.

Lauren Petracca/The New York Times/Redux

A prominent journal has decided to retract a paper by Ranga Dias, a physicist at the University of Rochester in New York who has made controversial claims about discovering room-temperature superconductors — materials that would not require any cooling to conduct electricity with zero resistance. The forthcoming retraction, of a paper published by  Physical Review Letters  ( PRL ) in 2021, is significant because the  Nature  news team has learnt that it is the result of an investigation that found apparent data fabrication.

PRL ’s decision follows  allegations that Dias plagiarized substantial portions of his PhD thesis  and a separate  retraction of one of Dias’s papers on room-temperature superconductivity by  Nature  last September . ( Nature ’s news team is independent of its journals team.)

After receiving an e-mail last year expressing concern about possible data fabrication in Dias’s  PRL  paper — a study, not about room-temperature superconductivity, but about the electrical properties of manganese disulfide (MnS 2 ) — the journal commissioned an investigation by four independent referees.  Nature ’s news team has obtained documents about the investigation, including e-mails and three reports of its outcome, from sources who have asked to remain anonymous. “The findings back up the allegations of data fabrication/falsification convincingly,”  PRL ’s editors wrote in an e-mail obtained by  Nature . Jessica Thomas, an executive editor at the American Physical Society, which publishes  PRL , declined to comment.

As part of the investigation, co-author Ashkan Salamat, a physicist at the University of Nevada, Las Vegas, and a long-time collaborator of Dias, supplied what he claimed was raw data used to create figures in the  PRL  paper. But all four investigators found that the data Salamat provided did not match the figures in the paper. Two of the referees wrote in their report that the conclusions of their investigation “paint a very disturbing picture of apparent data fabrication followed by an attempt to hide or coverup [sic] the fact. We urge immediate retraction of the paper”.

Documents show that  PRL  agreed with the findings of the investigation, describing Salamat’s submission of “so-called raw data” as “what appears to be a deliberate attempt to obstruct the investigation”.

Salamat did not respond to multiple requests from  Nature  for comment by the time this story published. Dias responded to  Nature ’s requests for comment in a statement sent by a spokesperson. In it, he denies any misconduct and makes clear his commitment to room-temperature superconductivity research. “We remain certain that there has been no data fabrication, data manipulation or any other scientific misconduct in connection with our work,” the statement says. “Despite this setback, we remain enthusiastic about continuing our work.”

Heated debate

When Dias and his collaborators published a paper in  Nature  in October 2020 reporting that they had created a superconductor that worked at about 15 °C under extreme pressure greater than one million atmospheres,  they immediately made headlines . Most superconductors operate only at frigid temperatures below 200 kelvin (−73.15 °C). Other researchers could not reproduce the results, and last year,  Nature  retracted the article . The retraction did not mention misconduct. Karl Ziemelis, chief physical sciences editor at the journal, explains that “data-processing irregularities” were discovered as a result of an investigation. “We lost confidence in the paper as a whole and duly retracted it. Our broader investigation of that work ceased at that point,” he says.

Earlier this year, Dias and his colleagues made an even more stunning claim, once again in  Nature : a new material made of lutetium, hydrogen and nitrogen (Lu-H-N) could stay superconducting at room temperature and relatively low pressures. Finding a material that is a superconductor under ambient conditions has long been a goal of physicists: applications of an ambient superconductor include energy-efficient computer chips and powerful magnets for magnetic resonance imaging (MRI) machines. But because of the 2022  Nature  retraction — and now the impending one in  PRL  — many physicists have been eyeing the Lu-H-N results with suspicion too.

Peter Armitage, a physicist at Johns Hopkins University in Baltimore, Maryland, who has been monitoring the controversy, says: “I just cannot see how we can trust anything [from Dias and Salamat] at this point.”

Asked about community trust in Dias’s research published by  Nature , Ziemelis explains that each manuscript is evaluated independently, suggesting that the 2022 retraction had no bearing on the consideration of the paper published this year. “Our editors make decisions [about accepting manuscripts] based solely on whether research meets our criteria for publication,” he says. “If concerns are raised with us, we will always investigate them carefully.”

Allegations emerge

Issues with data in the 2021  PRL  paper came to light late last year because James Hamlin, a physicist at the University of Florida in Gainesville, had noticed that text from his own 2007 PhD thesis appeared in Dias’s 2013 thesis. This prompted Hamlin to closely examine Dias’s work.

Scrolling through figures from Dias’s thesis, and comparing them with figures in recent papers by Dias, Hamlin noticed that a plot of the electrical resistance for the material germanium tetraselenide (GeSe 4 ), discussed in Dias’s thesis, closely matched a plot of the resistance for MnS 2 , presented in the  PRL  paper. Both plots had an extremely similar curve, especially at low temperatures, he says (see ‘Odd similarity’). “It just seemed very hard to imagine that this could all be a coincidence.”

Electrical resistance to temperature graphs.

Credit: Nature

On 27 October 2022, Hamlin passed his concerns to  PRL  and all the authors of the paper. One of them, Simon Kimber, a physicist then at the University of Burgundy Franche-Comté in France, was immediately concerned and requested a retraction. “The moment I saw the comment, I knew something was wrong,” Kimber told  Nature . “There is no physical explanation for the similarities between the data sets.” None of the other authors, besides Dias, responded to  Nature ’s requests for comment.

PRL  asked the authors for a response to the concerns Hamlin had pointed out. Documents  Nature  obtained clarify what happened next. On 24 February this year, Salamat replied, defending the integrity of the data and claiming that other materials also exhibited similar behaviour. Kimber was unconvinced, however, and on 5 March, he wrote a reply to Salamat, noting that one feature in the GeSe 4  plot, a dip in electrical resistance around 45 kelvin, seemed to be the result of a measurement glitch. The same dip appeared in the MnS 2  plot, which should be impossible if the two were data from separate measurements.

Days later,  PRL   confirmed it was investigating the paper , and on 20 March, applied an ‘ expression of concern ’ to it online.

Investigating the data

After analysing the data, two of the four investigating referees concluded that the “only explanation of the similarity” in the GeSe 4  and MnS 2  plots is that data were taken from Dias’s 2013 thesis and used in the 2021  PRL  paper. Another of the referees bolstered this conclusion in their own report by demonstrating how the alleged fabrication could have happened: the referee found a simple mathematical function that could be applied to the GeSe 4  data to map it onto the MnS 2  data (see ‘Curve matching’).

Electrical resistance to temperature curve matching graph.

Nature  discovered the identity of this anonymous referee, and reached out to them. “When you actually see the close agreement between the transformed GeSe 4  dataset and the purported MnS 2  data, it seems highly unlikely that this could be coincidental,” the referee told  Nature .

For David Muller, a physicist at Cornell University in Ithaca, New York, the circumstances surrounding Dias’s retractions and thesis are reminiscent of a series of retractions made two decades ago, after researcher Jan Hendrik Schön at Bell Labs in Murray Hill, New Jersey, was  discovered to have falsified data . In Schön’s case, and in his own experience, Muller says, “people who fake data tend not to do it just once”.

Disclosure: The author of this story is related to Robert Garisto, the managing editor of  PRL . The two have had no contact about this story. Documents obtained by  Nature  show that Samindranath Mitra was the acting managing editor for  PRL ’s investigation.

This article is reproduced with permission and was first published on July 25, 2023.

National Academies Press: OpenBook

Fostering Integrity in Research (2017)

4 Context and Definitions

In the end, a commitment to the ethical standard of truthfulness, through an understanding of its meaning to science, is essential to enhance objectivity and diminish bias. Unfortunately, the ethos of concern for scientific misconduct continues to dominate the research-ethics movement. This focus is damaging because it turns the attention to seeking and finding wrong-doers and determining punishment rather than discussing generic issues of doing the right thing, preventing harms, seeking benefits, and understanding the right-making and wrong-making characteristics of actions. The focus on scientific misconduct makes ethical issues appear synonymous with legal issues and the search for ethical understanding synonymous with carrying out an investigation.

— S. J. Reiser (1993)

Synopsis: Integrity is essential to the functioning of the research enterprise and personally important to the vast majority of those who dedicate their lives to science. Yet research misconduct and detrimental research practices are facts of life. They must be understood and addressed. This chapter begins with a brief historical overview of misconduct in science, followed by a discussion of definitions and categories that the committee recommends for use by the research enterprise going forward. This framework retains many key aspects of the 1992 committee’s work but suggests several changes.

HISTORICAL CONTEXT

Prominent cases of research misconduct have been uncovered regularly over the time that science has existed as an organized activity. The Piltdown Man hoax of the early 20th century is perhaps the most famous of numerous archaeological hoaxes and frauds that have continued up to recent times. In 2000, amateur archaeologist Shinichi Fujimura was found to have “discovered” artifacts that he had placed in older strata than where they had actually been found. Other fields, such as evolutionary biology, are also represented. Fraudulent work in the first half of the 20th century by Paul Kammerer and Trofim Lysenko purported to prove environmentally acquired inheritance. Questions have even been raised about the integrity of work by revered scientists from the past (Broad and Wade, 1983; Goodstein, 2010).

According to the report Responsible Science (NAS-NAE-IOM, 1992), “until [recently] scientists, research institutions, and government agencies relied solely on a system of self-regulation based on shared ethical principles and generally accepted research practices to ensure integrity in the research process.” As discussed in Chapter 2, science and research have not had defined mechanisms for certification, licensure, and imposing penalties for unethical behavior that have developed in professions such as medicine, law, and some areas of professional engineering. Behaviors such as fabrication of research results and plagiarism might be punished by employers but were generally not subject to legal action, at least in the United States. 1

Unethical behavior in research first emerged as a policy issue in connection with the treatment of human research subjects and laboratory animals. While ethical concerns about human subjects were first raised earlier, it was the Nazi and Japanese military experiments on prisoners during World War II that led to the development of formal international codes. The Tuskegee syphilis study by the U.S. Public Health Service (PHS), which was launched in the 1930s but only became subject to publicity and critical examination in 1972, provided impetus for policy changes. Policies to protect human subjects and laboratory animals were adopted in the United States during the 1960s and 1970s.

A series of cases in which researchers fabricated data or plagiarized the work of others garnered considerable publicity and prompted congressional hearings in 1981 (Medawar, 1996; Rennie and Gunsalus, 2001; Steneck, 1994). Conflict-of-interest questions also began arising in this period, related to the effects of researchers benefiting from studies by being awarded stock and other rewards. Due in part to the growth of the research enterprise and the steady increase in federal funding for research, these high-profile cases of fabrication or plagiarism in publicly funded studies were seen as examples of defrauding taxpayers and resulted in congressional attention. Federal agencies began to develop policies on research misconduct during the 1980s. During the late 1980s and early 1990s, cases of alleged immunology data falsification and fabrication against pathologist Thereza Imanishi-Kari of Tufts University (a collaborator of Nobel Prize winner David Baltimore) and data falsification allegations against Mikulas Popovic and Robert Gallo at the National Institutes of Health attracted significant attention from Congress and the news media (Gold, 1993; Kaiser, 1997; Kevles, 1998). After lengthy, complicated, and controversial investigations and adjudication processes, none of the accused in these cases was found to have committed research misconduct. However, these cases provided an important impetus for federal agencies—the Department of Health and Human Services and the National Science Foundation (NSF) in particular—to regularize how allegations of research misconduct would be investigated and adjudicated by specifying the responsibilities of research institutions, the practices that constitute misconduct and are subject to corrective action, and the oversight roles of the agencies themselves.

___________________

1 The contexts where data fabrication is subject to criminal prosecution in the United States are discussed in Chapter 7.

These cases had a significant impact on the development of federal and institutional approaches to addressing misconduct. The evolution of these approaches is summarized in Table 4-1. Current approaches to addressing research misconduct and detrimental research practices are described in detail in Chapter 7.

WHY IS A FRAMEWORK OF CONCEPTS AND DEFINITIONS OF KEY TERMS NEEDED?

Chapter 2 explored the values underlying research and the behaviors that express those values. As behaviors that violate those values, such as data fabrication, emerged as serious problems, researchers and policy makers sought to develop a framework of concepts and definitions to use in preventing, investigating, taking corrective action, and otherwise addressing those behaviors. The remainder of this chapter reviews concepts and definitions of behaviors that violate the values of research, the evolution of definitions underlying U.S. federal policies, and alternatives that are used by some U.S. institutions as well as by governments and research institutions outside the United States. Rationales for different approaches are explored, and this committee’s recommended framework is presented and explained.

TABLE 4-1 Research Integrity Policy Time Line

SOURCES: ORI, 2011; OSTP, 2000.

Some issues affecting the advantages and disadvantages of alternative approaches only become clear when considering how concepts and definitions related to violations of research integrity are actually understood and utilized in specific contexts, such as institutional investigations of alleged misconduct that are overseen by federal agencies. Issues arising from implementation of these concepts and definitions are covered in Chapter 7 .

In order to develop policies and implementing mechanisms that define how and under what circumstances research institutions are to be answerable to the federal government for the research-related behaviors of their employees, it is necessary for those behaviors to be identified. It is in this context that the definitions of research misconduct and other terms have policy implications. These concepts and definitions also have a broader significance to the research enterprise and its stakeholders, since fostering high-quality research that advances knowledge requires identifying and preventing behaviors that violate the values of research (IAC-IAP, 2012).

The 1992 report Responsible Science put forward a framework of terms to describe and categorize behaviors that depart from scientific integrity ( NAS-NAE-IOM, 1992 ). This framework was developed around the terms misconduct in science , questionable research practices , and other misconduct . One of the tasks of this committee was to examine this framework and make recommendations about whether and how it should be updated. The goal is to describe a framework of terms and definitions that is appropriate for today’s environment and that advances efforts to foster research integrity.

The sources or causes of actions that violate the values of research suggest different potential responses or approaches to preventing and addressing them. If the action arises from ignorance, education and mentoring may be the most appropriate responses. If the action arises from perverse incentives in the research enterprise, the removal or mitigation of those incentives may be warranted. If the action is criminal or violates the requirements of employment contracts or research grants, then appropriate penalties or other corrective actions would apply.

However, human actions often cannot be neatly ascribed to a single one of these causes. Rather, a given action can be multiply determined and therefore call for a multifaceted response. Furthermore, the causes of research misconduct and other actions that violate the values of research generally do not all lie within the individual. The social and institutional context of research, ranging from the atmosphere within a given research group to the national governance of research systems, creates incentives and disincentives for particular actions. These issues are explored in more detail in Chapter 6 .

RESEARCH MISCONDUCT

Developing a workable definition of research misconduct requires grappling with several issues. First, actions covered by the definition should represent significant departures from research values and related norms, whether these are field-specific or more global, and also be committed with the intent to mislead or deceive.

In addition, the definition of research misconduct should have clear and logically supportable boundaries. The actions included should be distinguished from transgressions that may occur on the part of researchers, and perhaps in the context of doing research, but which are better addressed by other frameworks. This will partly depend on what those other frameworks are, meaning that a definition of research misconduct appropriate in a given country might not be appropriate elsewhere. For example, while the United States has separate policies and regulations for dealing with accusations of fabrication of data, protecting human research subjects, and ensuring humane treatment of laboratory animals, in some countries these issues are covered by a unified regulatory framework.

Also, as will be discussed further below, research institutions themselves may choose to adopt definitions of research misconduct for the purposes of their own internal management and employment policies that are broader than the definition adopted by the federal government. In the discussion below, the appropriateness or suitability of research misconduct definitions is considered primarily from the standpoint of U.S. federal policy.

The 1992 Responsible Science report defined misconduct in science as “fabrication, falsification, or plagiarism in proposing, performing, or reporting research” ( NAS-NAE-IOM, 1992 ). It added that misconduct in science does not include errors of judgment; errors in the recording, selection, or analysis of data; differences in opinions involving the interpretation of data; or misconduct unrelated to the research process. Further, failure in scientific research is to be expected, since exploration entails risks. Projects or studies that fall short of hopes and expectations are not a sufficient basis for identifying misconduct.

Since 1992 the definition of misconduct in science as fabrication, falsification, or plagiarism (FFP) has become a central feature of U.S. institutional and governmental approaches to addressing breaches of scientific integrity. In 2000 the term research misconduct was adopted by the Office of Science and Technology Policy (OSTP) in the Executive Office of the President as part of its Federal Policy on Research Misconduct and was defined as FFP:

I. Research Misconduct Defined

Research misconduct is defined as fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results.

Fabrication is making up data or results and recording or reporting them.

Falsification is manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record.

Plagiarism is the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit.

Research misconduct does not include honest error or differences of opinion. ( OSTP, 2000 )

Alternative Definitions and Non-FFP Elements

The adoption of FFP as the definition of research misconduct by OSTP came about through a lengthy, contentious process. Alternative definitions were developed, considered, and debated over a period of years. At the same time, and continuing to the present, other countries have confronted similar issues and have reached a variety of conclusions. Exploring these approaches is useful in understanding the relative advantages of the FFP-only definition of research misconduct and possible alternatives.

It is noteworthy that all of the alternative definitions of research misconduct that the committee is aware of—past or present, recommended or implemented—include fabrication, falsification, and plagiarism. The differences all emerge from the question of whether other behaviors should be included as well.

Other Serious Deviations

Prior to the adoption of the unified federal definition of research misconduct in 2000, the U.S. Public Health Service (which oversees research supported and performed by the National Institutes of Health) defined misconduct in science as “falsification, fabrication, plagiarism, or other practices that seriously deviate from those that are commonly accepted within the research enterprise for proposing, conducting, or reporting research” ( Rennie and Gunsalus, 1993 ). The definition specified that misconduct “does not include honest error or honest difference in interpretations or judgments of data” ( Price, 2013 ). The National Science Foundation’s definition included FFP and “other serious deviations from accepted practices in proposing, carrying out, or reporting research results from activities funded by NSF” ( Price, 2013 ). NSF’s definition also included “retaliation of any kind against a person who reported or provided information about suspected or alleged misconduct and who has not acted in bad faith.”

Both the PHS and NSF definitions allowed room to consider offenses other than FFP as research misconduct. Much of the research enterprise, including research universities and the associations representing them, opposed the inclusion of elements other than FFP in federal definitions, particularly the “other serious deviations” clause. For example, Responsible Science states that “the vagueness of this category has led to confusion about which actions constitute misconduct in science” ( NAS-NAE-IOM, 1992 ). Concerns have also been raised that the clause would open the door to penalizing innovative approaches to research that could potentially yield significant advances.

A concrete illustration of the disagreement over “other serious deviations” arose when the Office of the Inspector General at the National Science Foundation (NSF-OIG) used the clause to launch a misconduct investigation against an investigator who “was accused of a range of coercive sexual offenses against various female undergraduate students and teaching assistants, up to and including rape” while on research trips to foreign countries led by the investigator ( Buzzelli, 1993 ). While Office of Inspector General officials asserted that the case supported the need for the “other serious deviations” clause, one prominent scientist argued that the case represented “a preposterous and appalling application of the definition of scientific misconduct” ( Schachman, 1993 ).

The “other serious deviations” clause remained in the two primary federal research misconduct definitions for a number of years following this case. During that time, there do not appear to have been additional cases in which its application was controversial, or any evidence that innovative research approaches were discouraged as a result, suggesting that there is cause to be skeptical about some of the arguments made against the clause. At the same time, it is not clear that the “other serious deviations” clause has been particularly missed in the years since. In the discussion below and in Chapter 7 , the specific elements that might be covered by the “other serious deviations” clause are explored in order to see whether there are research behaviors that might not be adequately investigated or subject to corrective action under current policies, and if so, whether changing the federal research misconduct policy is the best way to accomplish this. Denmark’s experience with the Lomborg case and its aftermath, in which a controversial finding of “scientific dishonesty” was later overturned (discussed later in this chapter), serves as an additional cautionary example of what can occur when governments and institutions utilize a broad, nonspecific definition of research misconduct.

On the basis of current knowledge, it appears that the “other serious deviations” clause and similar formulations may not have the adverse impacts on research that some have feared, but they may introduce the risk that a controversial or mishandled case could lead to turmoil and a loss of credibility on the part of the institutions and agencies charged with addressing research misconduct.

The Ryan Commission

In 1995, the Commission on Research Integrity was organized by Congress to “advise the Secretary of Health and Human Services and Congress about ways to improve the Public Health Service (PHS) response to misconduct in biomedical and behavioral research receiving PHS funding.” Known as the Ryan Commission after its chairman, Harvard professor Kenneth Ryan, it released a report on misconduct in research and treatment of good-faith whistleblowers ( Commission on Research Integrity, 1995 ).

The report articulated the interest of the federal government in the integrity of research it funded and concluded that the definition of misconduct should be based on the “fundamental principle that scientists be truthful and fair in the conduct of research and the dissemination of research results.” The commission defined its driving concern as “What is in the best interest of the public and science?” Its work aimed to provide “vital guidance for personal and ethical judgments and decisions concerning the professional behavior of scientists.”

The commission recommended broadening the definition of misconduct beyond FFP to encompass misappropriation, interference, and misrepresentation:

1. Research Misconduct

Research misconduct is significant misbehavior that improperly appropriates the intellectual property or contributions of others, that intentionally impedes the progress of research, or that risks corrupting the scientific record or compromising the integrity of scientific practices. Such behaviors are unethical and unacceptable in proposing, conducting, or reporting research, or in reviewing the proposals or research reports of others.

Examples of research misconduct include but are not limited to the following:

Misappropriation: An investigator or reviewer shall not intentionally or recklessly

  • plagiarize, which shall be understood to mean the presentation of the documented words or ideas of another as his or her own, without attribution appropriate for the medium of presentation; or
  • make use of any information in breach of any duty of confidentiality associated with the review of any manuscript or grant application.

Interference: An investigator or reviewer shall not intentionally and without authorization take or sequester or materially damage any research-related property of another, including without limitation the apparatus, reagents, biological materials, writings, data, hardware, software, or any other substance or device used or produced in the conduct of research.

Misrepresentation: An investigator or reviewer shall not with intent to deceive, or in reckless disregard for the truth,

  • state or present a material or significant falsehood; or
  • omit a fact so that what is stated or presented as a whole states or presents a material or significant falsehood. ( Commission on Research Integrity, 1995 )

The commission based its recommendation to include “interference” as an element of misconduct on testimony it received about cases in which researchers sabotaged the experiments of others or absconded with vital data, arguing that existing laws against vandalism were often not adequate to address these situations. It also recommended that obstruction of investigations of research misconduct, and repeated noncompliance with research regulations after notice, be defined as other forms of “professional misconduct.” Finally, the commission made several recommendations concerning the conduct and oversight of investigations, including a “Whistleblower’s Bill of Rights.”

The Ryan Commission’s proposed misappropriation, interference, and misrepresentation definition of research misconduct was opposed by some members of the research enterprise, including the leadership of the Federation of American Societies for Experimental Biology and the National Research Council of the National Academy of Sciences. The criticisms of the definition focused on two issues.2 First, the definition took the form of “leading principles with examples,” which was characterized as “vague and open-ended” (Alberts et al., 1996). The commission’s report had itself argued that fabrication, falsification, and plagiarism as understood in the agency policies in effect at that time were “neither narrow nor precise” (Commission on Research Integrity, 1995). Second, regarding the examples themselves, the concern was raised that the inclusion of omitting facts as an example of misrepresentation could open the door to regarding omissions or mistakes in citation as misconduct (Glazer, 1997). While many of the commission’s recommendations later were incorporated into governmental regulatory approaches, its approach to the definition of research misconduct was abandoned.

2 Chapter 7 will discuss issues raised by some of the other Ryan Commission recommendations.

Alternative Research Misconduct Definitions Used by U.S. Research Institutions and Private Sponsors

While U.S. research institutions must apply the federal research misconduct definition to federally supported work, they are free to adopt definitions of research misconduct that include behaviors other than FFP. A recent analysis found that more than half of 189 universities studied “had research misconduct policies that went beyond the federal standard” ( Resnik et al., 2015 ). The most common non-FFP element was “other serious deviations,” with more than 45 percent of institutions including it. Other misconduct elements adopted by at least 10 percent of institutions were “significant or material violations of regulations,” “misuse of confidential information,” “misconduct related to misconduct,” “unethical authorship other than plagiarism,” “other deception involving data manipulation,” and “misappropriation of property/theft” ( Resnik et al., 2015 ). Institutional investigations of non-FFP misconduct are not reported to federal agencies or reviewed by them. Most of the policies that went beyond FFP were adopted after 2001, and a higher proportion of institutions in the lowest quartile of research funding adopted such policies than those in the upper quartiles.

Nonfederal research sponsors may also adopt research misconduct definitions different from those of the federal government. For example, the Howard Hughes Medical Institute’s policy, adopted in 2007, defines research misconduct as FFP and “any other serious deviations or significant departures from accepted and professional research practices, such as the abuse or mistreatment of human or animal research subjects” ( HHMI, 2007 ).

Non-U.S. Examples

Policy approaches to fostering research integrity vary widely around the world, and the same variety can be seen in how research misconduct is defined (or not defined). A recent survey of research misconduct policies around the world found that 22 of the top 40 R&D-performing countries have national policies, and several more are in the process of developing a policy (Resnik et al., 2015). All of the countries that have policies include FFP in their definitions, with many including additional elements such as unethical authorship and publication practices, other serious deviations, and violation of regulations protecting human research subjects or laboratory animals (Resnik et al., 2015). The following examples illustrate the choices other countries have made, which are relevant to the question of how U.S. definitions and policies operate in a global context.

Research Councils UK, the umbrella body for the United Kingdom’s government research funding agencies, has a lengthy and detailed definition of “unacceptable conduct”:

Unacceptable conduct includes each of the following:

Fabrication

This comprises the creation of false data or other aspects of research, including documentation and participant consent.

Falsification

This comprises the inappropriate manipulation and/or selection of data, imagery and/or consents.

Plagiarism

This comprises the misappropriation or use of others’ ideas, intellectual property or work (written or otherwise), without acknowledgement or permission.

Misrepresentation, including:

  • misrepresentation of data, for example suppression of relevant findings and/or data, or knowingly, recklessly or by gross negligence, presenting a flawed interpretation of data;
  • undisclosed duplication of publication, including undisclosed duplicate submission of manuscripts for publication;
  • misrepresentation of interests, including failure to declare material interests either of the researcher or of the funders of the research;
  • misrepresentation of qualifications and/or experience, including claiming or implying qualifications or experience which are not held;
  • misrepresentation of involvement, such as inappropriate claims to authorship and/or attribution of work where there has been no significant contribution, or the denial of authorship where an author has made a significant contribution.

Breach of duty of care , whether deliberately, recklessly or by gross negligence:

  • disclosing improperly the identity of individuals or groups involved in research without their consent, or other breach of confidentiality;
  • placing any of those involved in research in danger, whether as subjects, participants or associated individuals, without their prior consent, and without appropriate safeguards even with consent; this includes reputational danger where that can be anticipated;
  • not taking all reasonable care to ensure that the risks and dangers, the broad objectives and the sponsors of the research are known to participants or their legal representatives, to ensure appropriate informed consent is obtained properly, explicitly and transparently;
  • not observing legal and reasonable ethical requirements or obligations of care for animal subjects, human organs or tissue used in research, or for the protection of the environment;
  • improper conduct in peer review of research proposals or results (including manuscripts submitted for publication); this includes failure to disclose conflicts of interest; inadequate disclosure of clearly limited competence; misappropriation of the content of material; and breach of confidentiality or abuse of material provided in confidence for peer review purposes.

Improper dealing with allegations of misconduct

  • Failing to address possible infringements including attempts to cover up misconduct or reprisals against whistle-blowers
  • Failing to deal appropriately with malicious allegations, which should be handled formally as breaches of good conduct. ( RCUK, 2013 )

Another example is Denmark, whose approach has evolved over time. The first Danish Committee on Scientific Dishonesty was established by the Danish Medical Research Council in 1992, with additional committees added in 1998 so as to cover all of science; collectively they are known as the Danish Committees on Scientific Dishonesty (DCSD) (Resnik and Master, 2013). At first, the DCSD employed a broad definition of scientific dishonesty based on “actions or omissions in research which give rise to falsification or distortion of the scientific message or gross misrepresentation of a person’s involvement in the research” (DCSD, 2015), with nine specific elements, including FFP, as well as “consciously distorted reproduction of others’ results” and “inappropriate credit as the author or authors” (DCSD, 2002).

However, in 2003, the Danish Committees on Scientific Dishonesty investigated allegations of scientific dishonesty made against Bjørn Lomborg, whose book The Skeptical Environmentalist challenged the view that global environmental problems are worsening. Its finding that Lomborg had committed scientific dishonesty was controversial and was ultimately overturned by Denmark’s Ministry of Science, Technology and Innovation, which cited insufficient evidence and arguments and an overly broad definition of scientific dishonesty ( Resnik and Master, 2013 ). Several years later, Denmark’s definition of scientific dishonesty was narrowed to ( DCSD, 2014 ):

The term “scientific dishonesty” (research misconduct) is defined as: falsification, fabrication, plagiarism and other serious violations of good scientific practice committed intentionally or due to gross negligence during the planning, implementation or reporting of research results.

There have been several international efforts to foster research integrity at the regional or global levels. For example, the European Code of Conduct for Research Integrity puts forward a definition that includes FFP as well as:

failure to meet clear ethical and legal requirements such as misrepresentation of interests, breach of confidentiality, lack of informed consent and abuse of research subjects or materials. Misconduct also includes improper dealing with infringements, such as attempts to cover up misconduct and reprisals on whistleblowers. ( ESF-ALLEA, 2011 )

In finding that a researcher has committed misconduct, intention plays a critical role. Fabrication and falsification generally are associated with an intention to deceive. If a researcher produces incorrect results out of negligence or carelessness, the behavior is typically criticized but would not be considered misconduct, since there was no conscious deception. Likewise, plagiarism is often intentional but can also result from sloppy work practices that could be characterized as “reckless.” In addition to stipulating that research misconduct does not include “honest error,” the federal research misconduct policy includes the provision that the behavior must be “committed intentionally, or knowingly, or recklessly” in order for a finding of misconduct to be warranted ( OSTP, 2000 ).

Dresser (1993) has pointed out that terms such as “intentional” and “fraudulent” are too broad and poorly defined to be useful in determining the culpability of researchers and in establishing penalties and other corrective steps for a given action. She pointed instead to the 1962 publication of the Model Penal Code, which sought to replace the “eighty or so” culpability terms previously found in state and federal criminal codes with four culpable mental state provisions (American Law Institute, 1985). Individuals act “purposely” if their “conscious object” is to engage in proscribed conduct. They act “knowingly” if they are aware of a high probability that they are engaging in such conduct. They act “recklessly” if they are aware of and “consciously disregard” a substantial risk that they are engaging in prohibited conduct. And they act “negligently” if they should be aware of a substantial risk that they are engaging in prohibited conduct. The first three terms denote forms of “subjective” culpability, in which an individual has some level of personal awareness of engaging in prohibited behavior.

Distinguishing “honest error” from deception can be very difficult, yet it is important for those charged with investigating an allegation to try to do so. A classic example that illustrates this is the “cold fusion” episode of 1989 involving Martin Fleischmann and B. Stanley Pons of the University of Utah (Goodstein, 2010). While that case involved research behavior that fell far short of good research practices, many observers and experts believe that it did not rise to the level of misconduct. The Fleischmann-Pons case also featured institutional and researcher choices, still controversial today, to pursue press-conference science and to rely on secrecy rather than publication to protect intellectual property. Even in the most egregious cases, a researcher may claim extenuating circumstances, negligence, or error rather than admitting culpability. Furthermore, the researcher engaging in the behavior may choose not to examine the motivations behind those acts so as to reduce personal accountability. In such cases, it can be difficult to establish culpability for a given behavior.

The intent to deceive is often difficult to prove. Proof almost always relies on circumstantial evidence, which can, however, include an analysis of the behavior of the person accused of misconduct. One commonly accepted principle, adopted by the Ryan Commission, is that the intent to deceive may be inferred from a person’s acting in reckless disregard for the truth ( Commission on Research Integrity, 1995 ). Providing guidance of this sort for misconduct investigative committees would likely be valuable going forward, given that it is often difficult to establish intent.

Implications of Retaining FFP as the Federal Misconduct Definition and Possible Changes

The above review of the debate over the U.S. research misconduct definition and alternatives past and present reveals examples of non-FFP behaviors that could be included in an amended federal research misconduct definition. Whether they should be or not depends on whether the behavior is adequately addressed under current policies related to research misconduct and other areas and, if not, whether the behavior would be addressed most effectively by including it in the federal research misconduct definition versus other options. For example, some behaviors that are included in non-U.S. definitions of research misconduct—such as violating the rights of human research subjects—are already addressed by a well-developed set of regulations and institutions in the United States (see the discussion of “ other misconduct ” below). Therefore, they will not be considered further in this context. Other behaviors such as sabotaging the experiments of others or retaliating against good-faith whistleblowers are worth examining in light of how the federal policy on research misconduct is actually operating within institutions and with regard to agency oversight. These issues will be discussed in Chapter 7 .

In the meantime, it is worth considering an issue that the committee spent considerable time discussing, that of authorship misrepresentation that might not be clearly included in OSTP’s definition of plagiarism. A footnote in the 1992 report Responsible Science states that “it is possible that some extreme cases of noncontributing authorship may be regarded as misconduct because they constitute a form of falsification” (NAS-NAE-IOM, 1992). Responsible Science also noted that in 1989 a Public Health Service annual report of its activities to address research misconduct included several abuses of authorship in examples of misconduct, such as “preparation and publication of a book chapter listing co-authors who were unaware of being named as co-authors,” and “engaging in inappropriate authorship practices on a publication and failure to acknowledge that data used in a grant application were developed by another scientist.” It should be noted that this formulation predated the 2000 federal policy on research misconduct and could have included cases considered under the “other serious deviations” provision.

As in the cases of whistleblower retaliation and sabotage, evaluating whether changes in federal policy should be made to better address authorship abuses involves considering the scale of the problem and weighing the advantages and disadvantages of policy changes against other alternatives. This will be covered in Chapter 7 .

DETRIMENTAL RESEARCH PRACTICES

The 1992 Responsible Science report identified an additional set of actions “that violate traditional values of the research enterprise and that may be detrimental to the research process,” but for which “there is at present neither broad agreement as to the seriousness of these actions nor any consensus on standards for behavior in such matters.” As examples of these actions, it cited

failing to retain significant research data for a reasonable period, maintaining inadequate research records, conferring or requesting authorship on the basis of a specialized service or contribution that is not significantly related to the research reported in the paper, refusing to give peers reasonable access to unique research materials or data that support published papers, using inappropriate statistical or other methods of measurement to enhance the significance of research findings, and misrepresenting speculations as fact or releasing preliminary research results, especially in the public media, without providing sufficient data to allow peers to judge the validity of the results or to reproduce the experiments.

Many of the actions the 1992 panel identified as questionable research practices (often labeled QRPs) have attracted less institutional consensus than research misconduct has, and consequently there is less agreement on policies and incentives to address them. This panel, however, has identified some of these practices not as questionable at all but as clear violations of the fundamental tenets of research. As will be covered in detail in Chapter 5, the past several decades of experience have clarified the damage that these practices wreak on the research enterprise, which may surpass the damage that research misconduct causes. Codes of responsible conduct of research in other countries include some of these practices in definitions of research misconduct that are broader than the U.S. definition.

Also, it is important to remember that Responsible Science and other analyses of its time focused on the actions of individual researchers, and that their concepts and definitions were framed accordingly. In light of several decades of subsequent experience and the massive changes in the scientific landscape detailed in Chapter 3, it is clear that the organizations that make up the research enterprise, such as research institutions, research sponsors, and journals, may also engage in behaviors that damage research integrity. It is just as necessary to identify and actively discourage these organizational actions and incentives as it is to better address individual behaviors.

This committee believes that many of the practices that up to now have been considered questionable research practices, as well as damaging behaviors by research institutions, sponsors, or journals, should be considered detrimental research practices (DRPs). Researchers, research institutions, research sponsors, journals, and societies should discourage and in some cases take corrective actions in response to DRPs.

Rather than develop a definitive list and specific corrective actions, the committee seeks to catalyze discussion within the research enterprise on what can be done to discourage DRPs more actively than has been done up to now. Indeed, the committee’s primary recommended response to DRPs is for all participants in the research enterprise to seek to significantly improve practices. How this may be done is covered in detail in Chapter 9.

These are examples of DRPs that the committee has considered and agrees on:

  • Detrimental authorship practices that may not be considered misconduct, such as honorary authorship, demanding authorship in return for access to previously collected data or materials, or denying authorship to those who deserve to be designated as authors;
  • Not retaining or making data, code, or other information/materials underlying research results available as specified in institutional or sponsor policies, or standard practices in the field;
  • Neglectful or exploitative supervision in research;
  • Misleading statistical analysis that falls short of falsification;
  • Inadequate institutional policies, procedures, or capacity to foster research integrity and address research misconduct allegations, and deficient implementation of policies and procedures; and
  • Abusive or irresponsible publication practices by journal editors and peer reviewers.

Further discussion of DRPs, how and why they are harmful, and how they should be discouraged can be found in Chapter 5.

OTHER MISCONDUCT

In addition to research misconduct and questionable research practices, Responsible Science identified a category of unacceptable behaviors that the panel termed other misconduct. These behaviors are not unique to the conduct of research even when they occur in a research environment. Such behaviors include “sexual and other forms of harassment of individuals; misuse of funds; gross negligence by persons in their professional activities; vandalism, including tampering with research experiments or instrumentation; and violations of government research regulations, such as those dealing with radioactive materials, recombinant DNA research, and the use of human or animal subjects.”

Because such actions are not unique to the research process, they do not constitute research misconduct, the panel said. They should, therefore, be addressed in other ways, such as the legal system, employment actions, or other mechanisms that address violations of professional standards. However, the panel added that some forms of other misconduct are directly associated with research misconduct, including “cover-ups of misconduct in science, reprisals against whistle-blowers, malicious allegations of misconduct in science, and violations of due process protections in handling complaints of misconduct in science.” As a result, these forms of other misconduct “may require action and special administrative procedures” (NAS-NAE-IOM, 1992).

As discussed above, whistleblower retaliation and tampering/sabotage will be explored further in Chapter 7. Otherwise, this committee agrees that the category of other misconduct should remain as it was recommended in Responsible Science.


The integrity of knowledge that emerges from research is based on individual and collective adherence to core values of objectivity, honesty, openness, fairness, accountability, and stewardship. Integrity in science means that the organizations in which research is conducted encourage those involved to exemplify these values in every step of the research process. Understanding the dynamics that support – or distort – practices that uphold the integrity of research by all participants ensures that the research enterprise advances knowledge.

The 1992 report Responsible Science: Ensuring the Integrity of the Research Process evaluated issues related to scientific responsibility and the conduct of research. It provided a valuable service in describing and analyzing a very complicated set of issues, and has served as a crucial basis for thinking about research integrity for more than two decades. However, as experience has accumulated with various forms of research misconduct, detrimental research practices, and other forms of misconduct, as subsequent empirical research has revealed more about the nature of scientific misconduct, and because technological and social changes have altered the environment in which science is conducted, it is clear that the framework established more than two decades ago needs to be updated.

Responsible Science served as a valuable benchmark to set the context for this most recent analysis and to help guide the committee's thought process. Fostering Integrity in Research identifies best practices in research and recommends practical options for discouraging and addressing research misconduct and detrimental research practices.


Allegations of Scientific Misconduct Mount as Physicist Makes His Biggest Claim Yet

If Ranga Dias of the University of Rochester, New York, and his team have observed room-temperature (294 K), near-ambient pressure superconductivity [1], their discovery could rank among the greatest scientific advances of the 21st century (see Research News: Muted Response to New Claim of a Room-Temperature Superconductor). Such a breakthrough would mark a significant step toward a future where room-temperature superconductors transform the power grid, computer processors, and diagnostic tools in medicine.

But for the past three years, the Rochester team—and Dias in particular—has been shrouded in allegations of scientific misconduct after other researchers raised questions about their 2020 claim of room-temperature superconductivity [2]. In September, the Nature paper reporting that result was retracted, as documented in Science and For Better Science. Further misconduct allegations against Dias have recently emerged, with researchers alleging that Dias plagiarized substantial portions of someone else’s doctoral thesis when writing his own and that he misrepresented his thesis data in a 2021 paper in Physical Review Letters (PRL) [3]. Jessica Thomas, Executive Editor of the Physical Review journals, confirmed that PRL has launched an investigation into that accusation. “This is a pretty serious allegation,” she says. “We are not taking it lightly.”

To understand those allegations, Physics Magazine independently examined Dias’ thesis and spoke with more than a dozen experts in high-temperature superconductivity, including Dias. Although opinions differ, an overwhelming majority agree that some form of misconduct has likely occurred. Dias denies the accusations. “I really do see all this as a scientific debate,” he says. “So even though these are meaningless, baseless claims, I really do think that these are adding to advancing the science.” He insists that the data for both of his room-temperature-superconductivity claims are robust and valid.

The First Room-Temperature Superconductor?

A superconductor is a material whose electrons travel with zero resistance. The first known superconductors could only remain in a superconducting state up to about 25 K. In the late 1980s, researchers found the first so-called high-temperature superconductors, which superconducted up to 90 K—a temperature achievable with liquid nitrogen. Scientists thought they were on the cusp of a room-temperature-superconductor revolution. But, as of now, none of the high-temperature superconductors used in those early experiments (mostly copper oxides) has been shown to maintain its superconductivity above about 160 K, which is below the coldest temperature recorded in Antarctica.

There is another predicted path to high-temperature superconductivity. Models indicate that under enormous pressure, hydrogen can transform into a metal that can superconduct at hundreds of kelvins [4]. Several groups of researchers, including Dias and his postdoctoral advisor Isaac Silvera of Harvard University, claim to have made metallic hydrogen in the lab [5], but conclusive evidence for the existence of the state remains elusive. Researchers have had more luck creating metallic hydrogen alloys that solidify at lower pressures. In 2015, a team from Germany reported superconductivity in hydrogen sulfide (H3S) at 203 K and 155 GPa [6]. This demonstration was followed four years later by reports of superconducting lanthanum hydride (LaH10) at 250 K and 170 GPa [7]. The first room-temperature superconductor appeared to be within reach.

On October 14, 2020, Dias and his colleagues announced in the journal Nature that they had discovered superconductivity in carbonaceous sulfur hydride (CSH), a hydrogen-containing material, at 287 K and 267 GPa—the first room-temperature superconductor [2]. The cover of Nature playfully described the results as “turning up the heat,” and initial reactions from other scientists were largely positive. “I called Dias and congratulated him,” says Mikhail Eremets of the Max Planck Institute for Chemistry, Germany, who led the team that reported the 2015 H3S result.

But not everyone was impressed. Among the unimpressed was Jorge Hirsch, a condensed-matter theorist at the University of California, San Diego, and a self-identified skeptic of high-temperature superconductivity in hydrogen-rich compounds. When the CSH result was published, Hirsch immediately checked the paper for flaws and soon focused on measurements of magnetic susceptibility, a property that describes the effect of a magnetic field on a material. Like electrical resistance, magnetic susceptibility should drop sharply when the material enters the superconducting state—a key test for superconductivity. Then it should flatten or very slowly rise as the temperature lowers further. To Hirsch, the shape of one of the magnetic susceptibility plots in the Nature paper (specifically, the inset image of “Extended Data Figure 7d”) seemed strange, because the slope at lower temperatures exhibited a sharp jump up. This puzzling piece of data was the first question that led to many more.

Data Manipulation in Europium

The search to understand the 2020 CSH data led Hirsch to inspect a 2009 result published in PRL that reported superconductivity in europium [8]. Because europium and CSH behave very differently (europium superconducts only up to 2.75 K), Hirsch was surprised to find that the two materials seemed to have similarly shaped magnetic susceptibility plots. Looking at the author contributions of both papers, Hirsch noted that both sets of magnetic susceptibility measurements were made by the same person, Matthew Debessai, who worked at Intel Corporation as of 2021. (Intel did not respond to a request for information about Debessai’s current status with the company.) “[The data] looked superficially similar…but it wasn’t like they were duplicates,” says James Hamlin, a high-pressure experimentalist at the University of Florida, and an author of the europium paper.

Struck by these commonalities, in November of 2020, Hirsch emailed Debessai, requesting the data. Debessai did not cooperate, so Hirsch contacted the paper’s other authors to get the information he wanted. Another coauthor found it on an old computer and handed it over in July 2021. “I cracked open the data thinking, ‘Alright, I'm going to prove to Jorge that there’s nothing wrong with this data,’” Hamlin says. He says that instead, he found “one issue after another,” including a section of the magnetic susceptibility data that appeared to have been copied and pasted from one temperature range to another. PRL was alerted, and the europium paper was retracted on December 23, 2021. A repeat of the europium experiment by another of Hamlin’s coauthors found no superconductivity.

When Hamlin uncovered the europium data issues, he set up a meeting with Dias and Ashkan Salamat, a physicist at the University of Nevada, Las Vegas, and an author of the CSH paper. “I said, ‘Look, there are problems with the europium data. The data has been manipulated, and you need to look at your CSH susceptibility data,’” Hamlin recalls. But, according to Hamlin, Dias and Salamat seemed unconcerned about the possible misconduct. They seemed more worried that the news of the europium data fabrication would go viral, he says.

Failure to Replicate CSH

While Hirsch looked into the published CSH data, others tried to replicate the CSH results. The description in the paper of how to synthesize CSH was “scarce but still sufficient,” says Alexander Goncharov, a materials scientist at the Carnegie Institution for Science in Washington, DC. He thought the replication would be doable. Goncharov and his team synthesized CSH but only through a modified procedure that used a different material in one of the steps (methane was substituted for pure carbon) [9]. Eremets, too, attempted to reproduce Dias’ results, but after six months of work, he says he gave up. To date, no unaffiliated experiment has corroborated Dias’ synthesis, let alone observed superconductivity in CSH.

Both Eremets and Goncharov contacted Dias for guidance in synthesizing CSH, but they say that they were given no help. Eremets says that he is used to more cooperation in these matters. When he announced his discovery of superconductivity in H3S, he gave Paul Chu, a high-temperature superconductivity expert at the University of Houston, immediate access to his lab in response to Chu’s request.

“It’s a tricky synthetic procedure, and it often doesn’t work,” says Russell Hemley, a condensed-matter physicist at the University of Illinois Chicago. Hemley collaborated with Dias on a recent CSH experiment [ 10 ]. “You have to get the initial pressures just right and use the right laser power and so on,” he says. “The fact that Eremets and Goncharov haven’t been successful doesn’t tell me very much, except that it’s tricky.”

Theorists have also had difficulty modeling the CSH results. In the past two and a half years, despite rigorous theoretical searches, nobody has found a single structure that contains carbon, sulfur, and hydrogen, and that superconducts at the same temperature as CSH, says Lilia Boeri, a condensed-matter theorist at the University of Rome [11]. By comparison, other hydrides have been easily simulated and their superconducting transition temperatures calculated to within 5% of their experimental values. Room-temperature superconducting CSH “should not exist,” Boeri says. Hemley argues that a simplified calculation called a virtual crystal approximation can account for CSH’s properties [12]. Boeri, however, says that the approximation applies only when the compound contains elements that are neighbors in the periodic table, which carbon and sulfur are not.

Back to the Data

For over a year, Dias refused to provide the CSH data files. He claimed that pending patents prevented him from sharing them. “I thought that excuse was baloney,” Hamlin says. “It’s just voltage versus temperature data.” Then on December 25, 2021—two days after the europium paper was retracted—Dias and Salamat changed their minds and published the complete dataset for the magnetic susceptibility measurements [13]. Hirsch began working on the data with Dirk van der Marel, a condensed-matter physicist at the University of Geneva. They quickly spotted something odd.

To obtain the CSH magnetic susceptibility data—a piece of critical evidence for superconductivity—Dias and his colleagues wrote in the Nature paper that they made two independent voltage measurements: the “raw” signal from the superconducting CSH sample and a background signal from a nonsuperconducting CSH sample. They then subtracted the background from the raw signal to get the “clean” signal.

When a signal is measured, it contains some noise in the form of random fluctuations in the data. Independently measured signals carry independent noise, so subtracting one from the other should yield a clean signal that is at least as noisy as either input, the opposite of what Dias and his colleagues presented. Hirsch and van der Marel concluded that the data were manipulated [14]. In response, Dias and Salamat said that they hadn’t measured the background signal, as they had claimed in the Nature paper. Rather, they had “constructed” it. That concession undercut Hirsch and van der Marel’s noise argument, and the two were stuck until a Reddit comment gave van der Marel an idea that works even if the background is constructed.
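The noise argument can be checked numerically. Below is a minimal sketch (synthetic data, not the published dataset; the noise levels are assumed purely for illustration) showing that when two independently noisy signals are subtracted, their noise adds in quadrature, so the difference can never be cleaner than either input:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Model only the noise component of two independently measured signals.
raw_noise = rng.normal(scale=0.10, size=n)         # noise of the "raw" signal
background_noise = rng.normal(scale=0.10, size=n)  # independent background noise

clean_noise = raw_noise - background_noise  # noise after background subtraction

# Independent noise adds in quadrature: std ≈ sqrt(0.10**2 + 0.10**2) ≈ 0.141,
# so the subtracted signal is noisier than either input -- never cleaner.
print(raw_noise.std(), clean_noise.std())
```

With a noise level of 0.10 in each input, the subtracted signal’s noise comes out near 0.14; a published “clean” signal quieter than the raw one is therefore inconsistent with a simple subtraction of two independent measurements.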

The trick is to understand relationships between the noise in different signals. To explain, van der Marel analogizes the CSH dataset to a family. In a typical family, the mother and father are not genetically related to each other, but both parents are genetically related to their child. Similarly, the noise of the background (the father) and of the raw signal (the mother) should be correlated only with the clean signal (the child) and not with each other. But for the released CSH data, that isn’t the case. The noise of the background and raw signals are correlated, as are the noise of the raw signal and the final, clean one. There is no correlation between the noise of the final signal and the background. In family terms, the mother is genetically related to the father, but the father is not related to the child. This “peculiar” relationship indicates that the clean signal was not obtained the way Dias and colleagues claim, van der Marel says.
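Van der Marel’s family analogy translates directly into correlation coefficients. The sketch below (again synthetic data, with unit-variance noise assumed for illustration) shows the pattern expected when a clean signal genuinely equals raw minus background: the two inputs are uncorrelated with each other, and each is correlated with the result.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Honest scenario: clean = raw - background, with independent noise in each input.
raw = rng.normal(size=n)          # noise of the measured raw signal (the "mother")
background = rng.normal(size=n)   # noise of the measured background (the "father")
clean = raw - background          # noise of the subtracted signal (the "child")

def corr(a, b):
    """Pearson correlation coefficient between two noise records."""
    return np.corrcoef(a, b)[0, 1]

# Expected family tree: parents uncorrelated with each other,
# each parent correlated with the child (|corr| = 1/sqrt(2) ≈ 0.71).
print(corr(raw, background))   # ≈ 0
print(corr(raw, clean))        # ≈ +0.71
print(corr(background, clean)) # ≈ -0.71
```

In the released CSH data the pattern was inverted: the raw and background noise were correlated with each other, while the clean signal’s noise showed no correlation with the background, which is what led van der Marel to conclude the clean signal was not produced as described.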

On September 26, 2022, the CSH paper was retracted. “It was a nonstandard method, and we hadn’t disclosed it. So that was the reason to retract,” Dias says. “[ Nature ] hasn’t questioned the validity of our data…the data is valid.” But a “nonstandard method” does not explain the data relationships, Hamlin says. “On the other hand, it’s very easy to understand how you could go the other direction, how you could take the published data and add something to it to get to the raw data.” In an analysis published after the retraction, Hamlin also found issues in the electrical resistivity data: some of the data points are separated by discrete steps, while others by smooth slopes. Digitization creates discrete steps, but it does not create smooth curves between them [15].
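Hamlin’s digitization argument can also be illustrated with synthetic data (a hypothetical sketch; the quantization step, the curve, and the splice location are all assumed for illustration). Values that passed through a digitizer sit on a fixed grid, so successive differences are integer multiples of the step; a smooth run of points between grid levels is a sign that some values were not direct measurements:

```python
import numpy as np

step = 0.01  # assumed quantization step of the digitizer

# A digitized signal: a smooth curve snapped onto the quantization grid.
t = np.linspace(0, 1, 200)
digitized = np.round(np.sin(t) / step) * step

# A smooth segment spliced in, as if interpolated rather than measured.
spliced = digitized.copy()
spliced[50:80] = np.linspace(digitized[50], digitized[79], 30)

def on_grid(x, step, tol=1e-9):
    """Fraction of successive differences that are integer multiples of `step`."""
    d = np.diff(x) / step
    return np.mean(np.abs(d - np.round(d)) < tol)

print(on_grid(digitized, step))  # 1.0 -- every step is a grid multiple
print(on_grid(spliced, step))    # < 1.0 -- the smooth run betrays itself
```

A test in this spirit flags interpolated stretches without any access to the original instrument, which is roughly what extracting the published curves and examining their point spacing allows.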

Asked whether a similar pattern would appear in the electrical resistivity data of other hydride superconductors, Dias answered affirmatively. But such step-like patterns have not been seen in the measurements of other such materials, and unlike magnetic susceptibility data, electrical resistivity measurements have no background subtraction that could introduce them.

The University of Rochester has conducted two internal inquiries into the CSH data-manipulation allegations. According to a spokesperson, both inquiries “determined that there was no evidence that supported the concerns.” But the university has not made the remit of the investigations public and has not provided any rationale for the investigations or details on how they reached their conclusions.

Allegations of Plagiarism

While Hamlin was digging into the CSH data, he came across familiar-looking sentences in the paper that presented the raw CSH data—lines that he recollected writing in his 2007 PhD thesis. On a hunch, he pulled up Dias’ 2013 thesis and fed both his thesis and Dias’ into a plagiarism checker. His computer screen lit up; the two documents contained numerous identical passages. Physics Magazine independently compared the two theses and found dozens of paragraphs that match word for word and two figures that have striking similarities.

In response to the allegations that he plagiarized Hamlin’s thesis, Dias says he has done nothing wrong: “I have appropriate citations.” Washington State University, which awarded Dias his PhD, declined to comment on whether it has carried out a misconduct investigation. A statement from the University of Rochester says, “Dr. Dias has taken responsibility for these errors and is working with his thesis advisor…to amend the thesis.”

Hamlin separately found a match between a resistivity plot for germanium selenide (GeSe4) in Dias’ thesis and one in a 2021 PRL paper on manganese sulfide (MnS2) [3]. He emailed Dias and the rest of the paper’s authors with his finding. Simon Kimber, a coauthor of the PRL paper, says that on receiving the email, he “could not think of a piece of chemistry or physics that would explain the similarity.” Kimber emailed PRL to request a retraction. PRL has launched an investigation into the paper.

The mounting allegations and various paper retractions have led to a situation where Dias’ condensed-matter colleagues are wary of his scientific claims. “Still, I don’t want to believe [the allegations] because it’s too serious,” Eremets says. He would prefer the field simply forget about the irreproducible CSH result and move on. Others are less forgiving. “I think there are these various concerns out there that need to be addressed before the community should accept any further claims,” Hamlin says.

This week’s room-temperature superconductor claim for nitrogen-doped lutetium hydride (NLH) has excited experts who think Dias has committed no misconduct. For example, Hemley, who was not involved in this new study, but is a collaborator of Dias, calls it “an important breakthrough.” He suggests that the nitrogen component of the material may stabilize it, increasing its superconducting transition temperature, as it does for some other hydrides. Boeri, on the other hand, says that she is deeply skeptical about the finding. The behavior of NLH “seems different from everything we know.”

For van der Marel, the new paper’s biggest issue is the way it handles the 2020 CSH result. Dias and his colleagues favorably cite the retracted paper and its retraction notice when they describe their background subtraction technique—the one at the center of the 2020 CSH misconduct allegation. “I also don’t understand Nature . Why did they let that happen?” he asks.

Disclosure: This article does not reflect the views of the American Physical Society (the publisher of Physics Magazine ) or of the Physical Review journals. The writer of this story—Dan Garisto—had no communication about this story with his father, Robert Garisto, the Managing Editor of PRL .

Correction (13 March 2023): An earlier version of the article stated that Steven Manly was part of the internal inquiries, on the basis of notes another source took after a phone call with a University of Rochester representative on 9 May 2022. Manly denies any involvement with the inquiries. “It is not an evaluation I have done, nor have I been asked to do one,” he says. As such, we have removed mention of his name from the article.

–Dan Garisto

Dan Garisto is a freelance science writer based in New York.

  1. N. Dasenbrock-Gammon et al., “Evidence of near-ambient superconductivity in a N-doped lutetium hydride,” Nature 615, 244 (2023).
  2. E. Snider et al., “RETRACTED ARTICLE: Room-temperature superconductivity in a carbonaceous sulfur hydride,” Nature 586, 373 (2020).
  3. D. Durkee et al., “Colossal density-driven resistance response in the negative charge transfer insulator MnS2,” Phys. Rev. Lett. 127, 016401 (2021).
  4. N. W. Ashcroft, “Metallic hydrogen: A high-temperature superconductor?” Phys. Rev. Lett. 21, 1748 (1968).
  5. I. F. Silvera and R. Dias, “Metallic hydrogen,” J. Phys.: Condens. Matter 30, 254003 (2018).
  6. A. P. Drozdov et al., “Conventional superconductivity at 203 kelvin at high pressures in the sulfur hydride system,” Nature 525, 73 (2015).
  7. A. P. Drozdov et al., “Superconductivity at 250 K in lanthanum hydride under high pressures,” Nature 569, 528 (2019); M. Somayazulu et al., “Evidence for superconductivity above 260 K in lanthanum superhydride at megabar pressures,” Phys. Rev. Lett. 122, 027001 (2019).
  8. M. Debessai et al., “Retraction: Pressure-induced superconducting state of europium metal at low temperatures [Phys. Rev. Lett. 102, 197002 (2009)],” Phys. Rev. Lett. 127, 269902 (2021).
  9. E. Bykova et al., “Structure and composition of C-S-H compounds up to 143 GPa,” Phys. Rev. B 103, L140105 (2021).
  10. H. Pasan et al., “Observation of conventional near room temperature superconductivity in carbonaceous sulfur hydride,” arXiv:2302.08622 (2023).
  11. M. Gubler et al., “Missing theoretical evidence for conventional room-temperature superconductivity in low-enthalpy structures of carbonaceous sulfur hydrides,” Phys. Rev. Materials 6, 014801 (2022).
  12. Y. Ge et al., “Hole-doped room-temperature superconductivity in H3S1−xZx (Z = C, Si),” Mater. Today Phys. 15, 100330 (2020).
  13. R. P. Dias and A. Salamat, “Standard superconductivity in carbonaceous sulfur hydride,” arXiv:2111.15017 (2021).
  14. D. van der Marel and J. E. Hirsch, “Extended comment on Nature 586, 373 (2020) by E. Snider et al.,” arXiv:2201.07686 (2022).
  15. J. J. Hamlin, “Vector graphics extraction and analysis of electrical resistance data in Nature volume 586, pages 373–377 (2020),” arXiv:2210.10766 (2022).


Scientific Misconduct: A Global Concern

  • Published: 05 September 2018
  • Volume 68 , pages 331–335, ( 2018 )

  • Suvarna Satish Khadilkar

In today’s world, evil appears to be all pervading. Medical publication is no exception. Scientific misconduct in medical writing is slowly becoming a global concern, especially over the last few decades. While the occurrence of such events is certainly rare, every researcher and reader should be aware of this entity. The researcher should ensure that no inadvertent error is construed as misconduct, and should take every effort to guard against it, and the reader should have a critical eye for the same. This article looks into various aspects of scientific misconduct and encourages awareness regarding the same.


Introduction

Do correct. For, He watcheth

Scientific writing continues to be plagued by “tweaks” even in modern times [1]. The pressure to publish for a successful medical career seems to be the root cause of this behavior, compounded by other influences and incentives.

Scientific integrity in scholarly writing is very important in the medical field, as it directly translates into the management of our patients. Without high standards of scientific integrity, the scientific community and the general public may be adversely impacted by recommendations emerging from “inferior science.” Over the last several years, incidents of misconduct have been increasingly reported worldwide. Even apex journals are not spared. It is really sad that false evidence for personal gain continues to muddle science, and hence each one of us has to introspect and follow the best ethical behavior. As most clinicians do not have direct access to “insider information,” the impact of inappropriate conduct is probably immense and not accurately measurable. In this context, a statement by the ex-editor of a highly reputed journal is an eye-opener to the magnitude of the problem [2].

What is Ethics?

Ethics is defined as correct behavior dictated internally by one’s own moral integrity. Guidelines are norms for correct behavior that are laid down but not enforced, and law is correct behavior governed externally and enforced by the state. If these are not followed, defaulters may be pulled up for such untoward behavior and even face consequences, blots that no doctor would want in his or her career.

Components of Ethical Research

Scientific integrity, a cordial mentor–trainee relationship, appropriate data acquisition, management, and sharing of data, and clarity on data ownership are the main components of ethical research. Appropriate procedures must be followed for research involving human and animal subjects. Collaborative science formalities, declaration of conflicts of interest, commitment of all contributors, an appropriate peer review process, good publication practices and responsible authorship are other essential components of ethical research.

Any research activity must be approved by an institutional ethical committee. Informed consent from participants is mandatory. Substandard research or non-compliance may constitute “misconduct.”

What is Scientific Misconduct?

“Research misconduct is defined as fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results” [ 3 ]. Another definition describes research misconduct as “any behavior by a researcher, whether intentional or not, that fails to scrupulously respect high scientific and ethical standards.” Various aspects of scientific misconduct are now well recognized and clear definitions are in place [ 4 ]. It should be acknowledged that in any profession intelligent individuals may hold varying opinions; scientific misconduct does not include honest errors or differences of opinion.

Unethical Practices in Science and Publication

Sometimes, an entire body of research may be based on fraud or untrue patient data. Selective or inaccurate publication, plagiarism (intellectual theft), redundant publication, undeclared conflicts of interest, inappropriate authorship, inappropriate acknowledgements and premature public statements are all examples of unethical practices. In the present-day context, performing unethical research and publishing without ethics committee permission are indefensible.

Why Do Authors Engage in Unethical Practices?

There are several reasons for these practices, and one or many may be in action. Even though some lapses may happen out of ignorance, many are committed intentionally; in either case, ignorance is unjustifiable in the modern scientific era. Authors aspiring to higher positions are under continuous pressure to publish, pressure that may be exerted by their institutes, peers or seniors. Sometimes overzealous ambition drives individuals down this path, and career prospects and fierce competition may also motivate such actions. A lack of publications may lead to loss of incentives such as promotion, or even possible termination of employment. In short, such authors are victims of the “publish or perish” system.

Some may not have any knowledge of the ethics or nuances of scholarly writing, so they simply follow what their colleagues have done before. Unfortunately, if senior colleagues have themselves engaged in unacceptable and unethical practices, juniors are likely to follow suit. It is the bounden moral responsibility of seniors and teachers to lead them along the right path.

Financial incentives from particular sources can also be motivating factors. For example, an author otherwise competent to do good research may write biased articles extolling the merits of a drug in return for financial inducement from its manufacturer. Others, unable to produce good work on their own, engage in misconduct to prove themselves.

At What Stages Can Misconduct Occur?

Misconduct can occur at the level of planning, wherein ideas may be borrowed improperly. It can occur at the stage of applying for permission to conduct the research, or research may be done without appropriate permissions at all. There can be misconduct in the collection of data, with deviations from approved protocols; merely obtaining permission is not enough, and the protocols approved by the institutional ethics committee must be followed. There can also be misconduct in the management of data, since collection, storage and transmission each open up many possibilities, and inadequate measures to ensure the authenticity of data can itself be construed as misconduct. Data obtained from subjects should be carried through to analysis with complete fidelity, and failure to ensure this can likewise be viewed as misconduct. Hence, it is easy to acknowledge that the ways in which misconduct can occur are multiple, and all researchers must actively guard against it.

Types of Misconduct

Let us understand the various forms of scientific misdeeds and misconduct. Misconduct can happen intentionally or unintentionally, and at any stage of research: at the time of planning, during practical execution, or during the process of scientific publication. Plagiarism is a serious type of offense and will be dealt with separately in the last editorial of the series on medical writing in JOGI.

Fabrication means making up data or results and recording or reporting them. It is simply reporting something that does not exist; even the mere intent to publish fabricated data is misconduct.

Falsification means manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record. An example of falsification is the manipulation of blood pressure readings in a drug trial evaluating hypertension in pregnancy.

Obfuscation means omission of critical data or results. Reporting positive outcomes and omitting adverse outcomes are types of obfuscation.

Plagiarism is the appropriation of another person’s ideas, processes, results, or words, without giving appropriate credit. It is an intellectual theft and is a serious offense. This will be dealt with separately in the next issue.

Unethical research includes starting research without obtaining ethics committee permission.

Conflict of interest can be defined as “a set of conditions in which professional judgment concerning a primary interest (such as patients’ welfare or the validity of research) tends to be unduly influenced by a secondary interest (such as financial gain)” [ 5 ].

Authors who make an active contribution to research without declaring a conflict of interest, or who declare no conflicts while actually having one, commit misconduct. If an author is an employee of a company and conducts research on that company’s products without declaring the conflict, the general public may be misguided by the biased study, and patient management will also be misdirected. Authors as well as contributors, editors, reviewers, and publishers should declare conflicts of interest.

Informed consent must be obtained from all participants of the study. Failing to do so amounts to scientific misconduct.

Irresponsible authorship is the commonest area of allegations of misconduct.

Before understanding what irresponsible authorship is, we need to understand “appropriate authorship,” on which international consensus exists. As per the International Committee of Medical Journal Editors (ICMJE) guidelines, authorship credit should be based only on [ 6 ]:

Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; and

Drafting the work or revising it critically for important intellectual content; and

Final approval of the version to be published; and

Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

The keyword in this definition is “and”: all of the above conditions must be met. To reiterate, merely contributing to one part of the publication does not justify authorship.

Inappropriate Authorship

Simply providing cases, offering authorship to someone without substantial contribution to the conduct of the research and publication, omitting authors’ names despite their significant contribution, or taking credit for the publication while being unwilling to take public responsibility for the data in case of dispute or litigation are all examples of irresponsible authorship. Most publications insist on a descriptive note on the specific role(s) of each author/contributor to prevent such problems. Some peculiar forms of inappropriate authorship are considered below.

Ghost Writing

Ghost writing is hiring someone to write an article that is officially credited to another person as author. A typical example is a senior author asking a subordinate to write an article, with credit going only to the senior author and not the junior. Paying someone to write an article while writing no part of it oneself is ghost writing.

Guest Authorship

Guest authorship is the intentional inclusion of the name of a reputed senior author who has not contributed to the research or publication, done to increase the chances of acceptance by journal editors. Offering authorship to close relatives who played no role in the making of the paper is another example of guest authorship.

Acknowledgements

Individuals who do not meet authorship criteria but who have assisted the research through their encouragement and advice, or by providing space, financial support, reagents, routine analyses or patient material, should be acknowledged in the text. Performing routine work or duties, funding or sponsoring the study, or being head of an institute or department does not qualify for authorship unless the above-mentioned criteria are fulfilled. Hence, if need be, such contributors should only be acknowledged.

Publication Bias and Misconduct

Positive trials are more likely to be submitted and published quickly. Some trials which did not have positive results may never be published [ 7 ].

Suppression—the failure to publish significant findings due to the results being adverse to the interests of the researcher or his/her sponsor(s).

Publication of deliberately false or misleading research.

Bare assertions—making entirely unsubstantiated claims and drawing baseless, unverifiable conclusions unrelated to the data.

Quoting fake references, and/or references that do not support the argument.

Premature publication: In the current generation, news travels fast, and exposing a finding in one mass medium means it can reach the entire world. However, this speed should not be allowed to short-circuit due scientific process. It is advisable that dissemination of study results through the media take place only after publication, or at least simultaneously.

Apart from ICMJE, other institutions have also come up with recommendations and guidelines.

ICMR Guidelines The Indian Council of Medical Research has published an exhaustive article titled “National Ethical Guidelines for Biomedical and Health Research Involving Human Participants.” One section of this guideline is titled “Responsible conduct of Research.” Under this section, the topics “Reviewing and reporting research,” “Responsible authorship and publication” and “Research misconduct and policies for handling misconduct” are covered in detail. It is hereby suggested that everyone be conversant with these guidelines [ 8 ].

Committee on Publication Ethics (COPE) In recent decades, a substantial spurt in publication misconduct prompted the editors of many prominent journals to come together and form COPE, whose motto is “Promoting integrity in research and its publication.” What started off as an experimental initiative now meets regularly and publishes guidelines on publication ethics; cases of misconduct and their findings are discussed, and best practices for authors and editors are published. COPE is now seen as a standard in scientific publishing, and our authors and readers will benefit from referring to its detailed publications [ 9 ].

Similarly, the World Association of Medical Editors (WAME) is a forum in which the editors of many medical journals have come together to frame guidelines for standardizing publication practices [ 10 ].

To summarize, integrity is the soul of research. Ethical research and sound publication ethics form a necessary continuum in the advancement of medical knowledge. Researchers, authors, editors, reviewers, publishers and sponsors should be aware of and fulfill their respective responsibilities. Good ethical research is a step toward scientific evolution that will help us achieve the final goal: women’s health.

1. Deshmukh MA, Dodamani AS, Khairnar MR, et al. Research misconduct: a neglected plague. Indian J Public Health. 2017;61(1):33–6.

2. The Ethical Nag. https://ethicalnag.org/2009/11/09/nejm-editor/. Accessed 18 Aug 2018.

3. Federal Research Misconduct Policy. https://ori.hhs.gov/federal-research-misconduct-policy. Accessed 18 Aug 2018.

4. Smith R. Research misconduct: the poisoning of the well. J R Soc Med. 2006;99(5):232–7.

5. Greco D, Diniz NM. Conflicts of interest in research involving human beings. J Int Bioeth. 2008;19(1–2):143–54.

6. Defining the Role of Authors and Contributors. http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html. Accessed 18 Aug 2018.

7. Brænd AM, Straand J, Jakobsen RB, et al. Publication and non-publication of drug trial results: a 10-year cohort of trials in Norwegian general practice. BMJ Open. 2016;6(4):e010535.

8. National Ethical Guidelines for Biomedical and Health Research Involving Human Participants. http://thsti.res.in/pdf/ICMR_Ethical_Guidelines_2017.pdf. Accessed 18 Aug 2018.

9. Committee on Publication Ethics. https://publicationethics.org/. Accessed 18 Aug 2018.

10. World Association of Medical Editors. http://www.wame.org/index.php. Accessed 18 Aug 2018.


Author information

Authors and affiliations.

Consultant Gyne-Endocrinologist, Bombay Hospital and Medical Research Centre, 12 New Marine Line, Mumbai, 400020, India

Suvarna Satish Khadilkar


Corresponding author

Correspondence to Suvarna Satish Khadilkar .

Additional information

Prof. Suvarna Satish Khadilkar MD DGO FICOG, CIMP, Diploma in Endocrinology (UK) is editor in chief of Journal of Obstetrics and Gynecology of India, and Treasurer, FOGSI; she is Consultant Gyne-Endocrinologist, Bombay Hospital and Medical Research Centre, Mumbai, Former Professor and Head, Dept of Ob-Gyn, RCSM, Government Medical College, Maharashtra and Asso. Prof. and Unit Chief Grant Medical College and Cama and Albless Hospital, Mumbai.


About this article

Khadilkar, S.S. Scientific Misconduct: A Global Concern. J Obstet Gynecol India 68 , 331–335 (2018). https://doi.org/10.1007/s13224-018-1175-8


Received : 21 August 2018

Accepted : 21 August 2018

Published : 05 September 2018

Issue Date : October 2018




May 9, 2019

In Fraud We Trust: Top 5 Cases of Misconduct in University Research

There’s a thin line between madness and immorality. The idea of the “mad scientist” has taken on a charming, even glorified perception in popular culture. From the campy portrayal of Nikola Tesla in the first issue of Superman, to Dr. Frankenstein, to Dr. Emmett Brown of Back to the Future, there’s no question Hollywood has softened the idea of the mad scientist. So I will not paint the scientists involved in these five cases of research fraud as such. The immoral actions of these researchers didn’t just affect their own lives, but also the lives and careers of innocent students, patients, and colleagues. Academic fraud is not only a crime; it is a threat to the intellectual integrity upon which the evolution of knowledge rests. It also compromises the integrity of the institution, as any institution takes a blow to its reputation for allowing academic misconduct to go unnoticed under its watch. Here you will find the five most notorious cases of fraud in university research from just the last few years.

Fraud in Psychology Research

In 2011, the Dutch psychologist Diederik Stapel was found to have committed academic fraud in a number of publications over the course of ten years, spanning three different universities: the University of Groningen, the University of Amsterdam, and Tilburg University.

Among the dozens of studies in question, most notable was falsified data in a study of racial stereotyping and the effects of advertisements on personal identity. The journal Science published the study, which claimed that people of one race stereotyped and discriminated against people of another race more in a chaotic, messy environment than in an organized, structured one. Stapel produced another study claiming that the average person judged employment applicants to be more competent if they had a male voice. Ultimately, both studies were found to be contaminated with false, manipulated data.

Psychologists discovered Stapel’s falsified work and reported that it did not stand up to scrutiny. They concluded that Stapel had taken advantage of a loose system, under which researchers were able to work in almost total secrecy and quietly manipulate data to reach their conclusions with little fear of being contested. A host of newspapers around the world had reported on Stapel’s research. He even oversaw more than a dozen doctoral theses, all of which have been rendered invalid, thereby compromising the integrity of former students’ degrees.

“I have failed as a scientist and a researcher. I feel ashamed for it and have great regret,” Stapel told the New York Times. You can read the particulars of this fraud case here.

Duke University Cancer Research Fraud

In 2010, Dr. Anil Potti left Duke University after allegations of research fraud surfaced. The fraud came in waves. First, Dr. Potti flagrantly lied about being a Rhodes Scholar to obtain hundreds of thousands of dollars in grant money from the American Cancer Society. Then he was caught outright falsifying data after one of his theories for personalized cancer treatment was disproven. The theory was intended to justify clinical trials involving over a hundred patients; once it was disproven, the trials could no longer take place, so Dr. Potti falsified data in order to continue them and obtain further funding.

Over a dozen papers that he published were retracted from various medical journals, including the New England Journal of Medicine.

Dr. Potti had been working on personalized cancer treatment, which he hailed as “the holy grail of cancer.” Many people’s bodies fail to respond to more traditional cancer treatments; personalized treatments offer hope because patients receive therapies tailored to their own unique constitution and the type of tumors they have. Because of this, patients flocked to Duke to register for trials of these drugs, and they were even told there was an 80% chance of finding the right drug for them. So excited were patients by this renewed hope that they trusted Dr. Potti’s trials and drugs; sadly, many of them suffered unusual side effects such as blood clots and damaged joints. The patients who took part in these trials filed a lawsuit against Duke, alleging that the institution administered improper chemotherapy to participants.

Duke settled these lawsuits with the families of the patients. You can read details of the case here.

Plagiarism in Kansas

Mahesh Visvanathan and Gerald Lushington, two computer scientists at the University of Kansas, confessed to accusations of plagiarism. They had copied large chunks of their research from the works of other scientists in their field; the plagiarism was so pervasive that even the summary statement of their presentation was lifted from another scientist’s article in a renowned journal.

Visvanathan and Lushington oversaw a program at the University of Kansas in which researchers reviewed and processed large amounts of data for DNA analysis. In this case, Visvanathan committed the plagiarism and Lushington knowingly refrained from reporting it to the university. Learn more about this case here.

Columbia University Research Misconduct

The year was 2010. Bengü Sezen was finally caught falsifying data after ten years of continuous fraud. Her fraudulent activity was so blatant that she even made up fake people and organizations in an effort to support her research results. Sezen was found guilty of more than 20 acts of research misconduct, and about ten of her research papers were retracted for plagiarism and outright fabrication.

Sezen’s doctoral thesis was fabricated entirely in order to produce her desired results. Her misconduct also greatly affected the careers of other young scientists who worked with her, who dedicated large portions of their graduate careers to trying to reproduce her results.

Columbia University moved to retract her Ph.D. in chemistry. Sezen fled the country during the investigation. Read further details about this case here.

Penn State Fraud

In 2012, Craig Grimes ripped off the U.S. government to the tune of $3 million. He pleaded guilty to wire fraud, money laundering, and making false statements to obtain grant money.

Grimes bamboozled the National Institutes of Health (NIH) and the National Science Foundation (NSF) into granting him $1.2 million for research on gases in blood, which helps detect disorders in infants. The Attorney’s Office revealed that Grimes never carried out this research and instead used the majority of the funds for personal expenditures. In addition to that $1.2 million, Grimes falsified information that helped him obtain $1.9 million in grant money via the American Recovery and Reinvestment Act. Consequently, a federal judge sentenced Grimes to 41 months in prison and ordered him to pay back over $660,000 to Penn State, the NIH, and the NSF.

Check out the details about this case here.



What is Scientific Misconduct?

By Qamar Mayyasah, June 5, 2020

Scientific misconduct can be described as a deviation from the accepted standards of scientific research, study and publication ethics. There can be many forms of scientific misconduct such as plagiarism, misconduct involving experimental techniques, and fraud. Another example may be when the results of a scientific investigation are reported without giving credit to the principal investigators whose work has been involved.

One particularly challenging area is the tension between science and transparency. Everyone acknowledges the importance of transparency in the conduct of scientific research, yet people often abuse it by engaging in misconduct that involves dishonesty or unethical behaviour. One example is scientific fraud, where authors create an article with fabricated images or data and submit it to a peer-reviewed publication without approval from an independent oversight board.

A number of different guidelines and rules apply to the conduct of scientific research. One prominent set is the APA (American Psychological Association) statement on scientific honesty and integrity, which outlines what a professional should do when they suspect that another person or group is engaged in fraudulent activity. The first rule of thumb is that scientists must provide a formal written assurance of what they are doing when investigating a topic, including an explanation of their methods, data and observations. This assurance is called a statement of responsibility.

Another guideline is that researchers should follow the principles of peer review and replication. For example, a published paper should include a reference to the original researchers, and a statement in the Methodology section such as, “In this manuscript, we acknowledge the sources cited herein and make our own analysis based on the data and observations reported in this article.” A scientist and each co-author should also sign a publication review statement when submitting an article to a journal. After all, a paper containing scientific misconduct will not be accepted by any of the major journals, and journal editors rightly take this issue very seriously.

If the validity of a body of work is in question (for example, the manipulation of images), readers or members of the scientific community may submit a formal ‘expression of concern’ about inappropriate behaviour having occurred within the study. It is then for the editor to determine whether there is real evidence of misconduct within the publication in question. Depending on the country, this may also involve a research ethics committee or an office of research integrity that investigates the validity and nature of the results presented by the researchers and their co-authors. An expression of concern may also be based on the nature of the experiments performed, or on strong evidence calling into question the ethical behaviour of researchers conducting a clinical trial, for example. Clear ethical guidelines apply to all published research involving human subjects.

When a researcher chooses a specific sample or source of data from which to examine a particular topic, this should be noted in the manuscript. If a sample is chosen despite poor quality, it may be unethical to use that source, and where multiple samples are used, each should be checked for reliability.

A number of guidelines also apply to the reporting of results. Research should not be reported with missing or incorrect data, or with conclusions that do not match the sample; conflicting results are just one reason these reports must be correct. Other unethical behaviour includes misrepresenting data, pretending to have a scientific relationship to something, or plagiarizing someone else’s work.

Whistle-blowers often make accusations of scientific misconduct against those whose work alarms them. Federal agencies such as the FDA are well aware of such allegations, and will take steps to investigate and determine whether they are substantiated. A whistle-blower may also make legitimate, ethical criticisms of a research record and present a case to the FDA regarding its fraudulent nature. The agency’s Whistleblower Protection Rule makes it possible for people to come forward with their complaints without fear of professional or legal repercussions.

Although the exact scope of the guidelines on responding to an accusation of scientific misconduct varies, federal agencies and scientific review boards treat ethics as a matter of continuing concern. They recognise that it is important for science to be objective, consistent and honest in its work; if those who review scientists’ work believe it is being compromised by unethical actions, that can undermine the value and integrity of the scientific enterprise. Any accusation of scientific misconduct should therefore be addressed immediately so that it can be properly investigated.

More Studies by Columbia Cancer Researchers Are Retracted

The studies, pulled because of copied data, illustrate the sluggishness of scientific publishers to address serious errors, experts said.


By Benjamin Mueller

Scientists in a prominent cancer lab at Columbia University have now had four studies retracted and a stern note added to a fifth accusing it of “severe abuse of the scientific publishing system,” the latest fallout from research misconduct allegations recently leveled against several leading cancer scientists.

A scientific sleuth in Britain last year uncovered discrepancies in data published by the Columbia lab, including the reuse of photos and other images across different papers. The New York Times reported last month that a medical journal in 2022 had quietly taken down a stomach cancer study by the researchers after an internal inquiry by the journal found ethics violations.

Despite that study’s removal, the researchers — Dr. Sam Yoon, chief of a cancer surgery division at Columbia University’s medical center, and Changhwan Yoon, a more junior biologist there — continued publishing studies with suspicious data. Since 2008, the two scientists have collaborated with other researchers on 26 articles that the sleuth, Sholto David, publicly flagged for misrepresenting experiments’ results.

One of those articles was retracted last month after The Times asked publishers about the allegations. In recent weeks, medical journals have retracted three additional studies, which described new strategies for treating cancers of the stomach, head and neck. Other labs had cited the articles in roughly 90 papers.

A major scientific publisher also appended a blunt note to the article that it had originally taken down without explanation in 2022. “This reuse (and in part, misrepresentation) of data without appropriate attribution represents a severe abuse of the scientific publishing system,” it said.

Still, those measures addressed only a small fraction of the lab’s suspect papers. Experts said the episode illustrated not only the extent of unreliable research by top labs, but also the tendency of scientific publishers to respond slowly, if at all, to significant problems once they are detected. As a result, other labs keep relying on questionable work as they pour federal research money into studies, allowing errors to accumulate in the scientific record.

“For every one paper that is retracted, there are probably 10 that should be,” said Dr. Ivan Oransky, co-founder of Retraction Watch, which keeps a database of 47,000-plus retracted studies. “Journals are not particularly interested in correcting the record.”

Columbia’s medical center declined to comment on allegations facing Dr. Yoon’s lab. It said the two scientists remained at Columbia and the hospital “is fully committed to upholding the highest standards of ethics and to rigorously maintaining the integrity of our research.”

The lab’s web page was recently taken offline. Columbia declined to say why. Neither Dr. Yoon nor Changhwan Yoon could be reached for comment. (They are not related.)

Memorial Sloan Kettering Cancer Center, where the scientists worked when much of the research was done, is investigating their work.

The Columbia scientists’ retractions come amid growing attention to the suspicious data that undergirds some medical research. Since late February, medical journals have retracted seven papers by scientists at Harvard’s Dana-Farber Cancer Institute. That followed investigations into data problems publicized by Dr. David, an independent molecular biologist who looks for irregularities in published images of cells, tumors and mice, sometimes with help from A.I. software.

The spate of misconduct allegations has drawn attention to the pressures on academic scientists — even those, like Dr. Yoon, who also work as doctors — to produce heaps of research.

Strong images of experiments’ results are often needed for those studies. Publishing them helps scientists win prestigious academic appointments and attract federal research grants that can pay dividends for themselves and their universities.

Dr. Yoon, a robotic surgery specialist noted for his treatment of stomach cancers, has helped bring in nearly $5 million in federal research money over his career.

The latest retractions from his lab included articles from 2020 and 2021 that Dr. David said contained glaring irregularities. Their results appeared to include identical images of tumor-stricken mice, despite those mice supposedly having been subjected to different experiments involving separate treatments and types of cancer cells.

The medical journal Cell Death & Disease retracted two of the latest studies, and Oncogene retracted the third. The journals found that the studies had also reused other images, like identical pictures of constellations of cancer cells.

The studies Dr. David flagged as containing image problems were largely overseen by the more senior Dr. Yoon. Changhwan Yoon, an associate research scientist who has worked alongside Dr. Yoon for a decade, was often a first author, which generally designates the scientist who ran the bulk of the experiments.

Kun Huang, a scientist in China who oversaw one of the recently retracted studies, a 2020 paper that did not include the more senior Dr. Yoon, attributed that study’s problematic sections to Changhwan Yoon. Dr. Huang, who made those comments this month on PubPeer, a website where scientists post about studies, did not respond to an email seeking comment.

But the more senior Dr. Yoon has long been made aware of problems in research he published alongside Changhwan Yoon: The two scientists were notified of the removal in January 2022 of their stomach cancer study that was found to have violated ethics guidelines.

Research misconduct is often pinned on the more junior researchers who conduct experiments. Other scientists, though, assign greater responsibility to the senior researchers who run labs and oversee studies, even as they juggle jobs as doctors or administrators.

“The research world’s coming to realize that with great power comes great responsibility and, in fact, you are responsible not just for what one of your direct reports in the lab has done, but for the environment you create,” Dr. Oransky said.

In their latest public retraction notices, medical journals said that they had lost faith in the results and conclusions. Imaging experts said some irregularities identified by Dr. David bore signs of deliberate manipulation, like flipped or rotated images, while others could have been sloppy copy-and-paste errors.

The little-noticed removal by a journal of the stomach cancer study in January 2022 highlighted some scientific publishers’ policy of not disclosing the reasons for withdrawing papers as long as they have not yet formally appeared in print. That study had appeared only online.

Roland Herzog, the editor of the journal Molecular Therapy, said that editors had drafted an explanation that they intended to publish at the time of the article’s removal. But Elsevier, the journal’s parent publisher, advised them that such a note was unnecessary, he said.

Only after the Times article last month did Elsevier agree to explain the article’s removal publicly with the stern note. In an editorial this week, the Molecular Therapy editors said that in the future, they would explain the removal of any articles that had been published only online.

But Elsevier said in a statement that it did not consider online articles “to be the final published articles of record.” As a result, company policy continues to advise that such articles be removed without an explanation when they are found to contain problems. The company said it allowed editors to provide additional information where needed.

Elsevier, which publishes nearly 3,000 journals and generates billions of dollars in annual revenue, has long been criticized for its opaque removals of online articles.

Articles by the Columbia scientists with data discrepancies that remain unaddressed were largely distributed by three major publishers: Elsevier, Springer Nature and the American Association for Cancer Research. Dr. David alerted many journals to the data discrepancies in October.

Each publisher said it was investigating the concerns. Springer Nature said investigations take time because they can involve consulting experts, waiting for author responses and analyzing raw data.

Dr. David has also raised concerns about studies published independently by scientists who collaborated with the Columbia researchers on some of their recently retracted papers. For example, Sandra Ryeom, an associate professor of surgical sciences at Columbia, published an article in 2003 while at Harvard that Dr. David said contained a duplicated image. As of 2021, she was married to the more senior Dr. Yoon, according to a mortgage document from that year.

A medical journal appended a formal notice to the article last week saying “appropriate editorial action will be taken” once data concerns had been resolved. Dr. Ryeom said in a statement that she was working with the paper’s senior author on “correcting the error.”

Columbia has sought to reinforce the importance of sound research practices. Hours after the Times article appeared last month, Dr. Michael Shelanski, the medical school’s senior vice dean for research, sent an email to faculty members titled “Research Fraud Accusations — How to Protect Yourself.” It warned that such allegations, whatever their merits, could take a toll on the university.

“In the months that it can take to investigate an allegation,” Dr. Shelanski wrote, “funding can be suspended, and donors can feel that their trust has been betrayed.”

Benjamin Mueller reports on health and medicine. He was previously a U.K. correspondent in London and a police reporter in New York.

Cureus, 10(5), May 2018

Combating Scientific Misconduct: The Role of Focused Workshops in Changing Attitudes Towards Plagiarism

Farooq A. Rathore

1 Department of Rehabilitation Medicine, PNS Shifa Hospital, DHA II, Karachi 75500, Pakistan

Noor E Fatima

2 Department of Medicine, CMH Lahore Medical College and Institute of Dentistry, Shami Road, Lahore Cantt.

Fareeha Farooq

3 Department of Biochemistry, Sir Syed Medical College for Girls, Karachi

Sahibzada N Mansoor

4 Department of Rehabilitation Medicine, Combined Military Hospital, Panoaqil, Pakistan

Introduction

Scientific misconduct is a global issue. There is low awareness among health professionals regarding plagiarism, particularly in developing countries, including Pakistan. There is no formal training in the ethical conduct of research or writing for under- and post-graduate students in the majority of medical schools in Pakistan. Internet access to published literature has made plagiarism easy. The aim of this study was to document the effectiveness of focused workshops on reducing scientific misconduct, as measured with a modified version of the attitude towards plagiarism questionnaire (ATPQ).

Materials and methods

A cross-sectional study was conducted with participants of workshops on scientific misconduct. Demographic data were recorded. A modified ATPQ was used as a pre- and post-test for workshop participants. Data were entered in SPSS v20 (IBM Corp., Armonk, NY, US). Frequencies and descriptive statistics were analyzed. An independent-samples t-test was run to analyze differences in mean pre-workshop ATPQ scores, and a paired-samples t-test compared pre- and post-workshop scores.

Results

There were 38 males and 42 females (mean age: 26.2 years) who participated in the workshops and completed the pre- and post-assessments. Most (59; 73.75%) were final-year medical students. One-third (33.8%) of the respondents had never attended a workshop related to ethics in medical research, and only 32.5% had published manuscripts in medical journals. More than half (55%) admitted witnessing unethical practices in research. There was a significant improvement in attitudes toward plagiarism after attending the workshop (mean difference = 7.18 (6.2), t = 10.32, P < .001).

Conclusions

Focused workshops on how to detect and avoid scientific misconduct can help increase knowledge and improve attitudes towards plagiarism, as assessed by the modified ATPQ. Students, residents, and faculty members must be trained to conduct ethical medical research and avoid all forms of scientific misconduct.

Plagiarism is “the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit” [ 1 ]. This issue has gained worldwide importance as medical research and writing evolve. Scientific writing has become an essential skill for those in academia and is imperative for professional growth and the development of the field. The culture of “publish or perish” has forced many into the unethical practice of plagiarism [ 2 ]. Scientific misconduct and plagiarism are global issues, plaguing not only developing countries but also technologically advanced nations, and are on the rise [ 3 - 4 ]. The apparent rise is likely due to increased awareness of scientific misconduct and the development of better software to detect plagiarism. Formal training in the ethical aspects of medical research and writing at the undergraduate or postgraduate level is lacking in most medical schools around the globe. This is of particular importance for developing countries, where students and faculty usually do not have adequate access to scientific literature and strong library services. The three major forms of misconduct in scientific research and writing are fabrication, falsification, and plagiarism [ 1 ]. Plagiarism can be compared to the proverbial hydra, as it has many heads (types): copying, paraphrasing, patchworking, poor or missing citations and quotations, paying to have articles written, and collusion with other students [ 5 ]. Easy Internet access has made plagiarism relatively simple [ 6 ].

There are established guidelines and codes of conduct regarding scientific misconduct and plagiarism, which have been adopted by medical journals and universities across the globe [ 7 ]. Still, many authors remain unaware and plagiarize.

The situation is no different in Pakistan. There is no defined curriculum or formal training in medical research and writing for undergraduate medical students in the majority of medical schools in the country. Although faculty members are required to publish a certain number of manuscripts for promotion, they too are not formally trained to write for the peer-reviewed biomedical literature. The revision of the faculty promotion rules by the Pakistan Medical and Dental Council (PMDC), globalization, the migration of physicians abroad, and international exposure have led to a paradigm shift in medical research and writing in Pakistan. These factors, combined with the current pressure to publish, sometimes result in authors engaging in unethical practices to achieve professional goals. Even faculty members are often unclear about the implications of deliberate or unintentional plagiarism and are unable to guide their students and residents.

We have been conducting workshops on medical writing and mentoring students and peers since 2014 [ 8 ]. We observed a low level of awareness regarding plagiarism among the participants of these workshops. We conducted a cross-sectional survey to assess the knowledge and attitudes of the students and faculty towards plagiarism using the modified version of attitude towards plagiarism questionnaire (ATPQ) [ 9 ]. It revealed that the general attitudes of Pakistani medical faculty members and medical students towards plagiarism were positive. This has also been confirmed by other authors in Pakistan [ 10 ]. We aimed to address this issue by conducting a series of focused workshops on scientific misconduct and plagiarism for the students and faculty members.

The objective of our study was to assess the effectiveness of focused workshops on plagiarism as a tool for change in attitudes towards plagiarism as assessed by a modified version of ATPQ.

Ethics approval was obtained from the ethics review committee of CMH Lahore Medical College and Institute of Dentistry. We used a modified version of the ATPQ, which has been validated for use in Pakistan [ 9 ].

A total of three workshops of three hours each, titled “Scientific misconduct, plagiarism and ethical aspects of medical research and writing: What you need to know?”, were planned and facilitated by two authors (Rathore and Mansoor) between January and June 2016. Both are published researchers who have conducted more than 60 workshops on medical writing. The aims of the workshops were to provide an overview of the topic, describe different forms of scientific misconduct and plagiarism as they relate to the local academic environment, and offer guidance on avoiding scientific misconduct and plagiarism. Each workshop was delivered in four sessions of approximately 40 minutes each. The workshop program details are described in Table 1.

COPE: Committee on Publications Ethics, JCPSP: Journal of College of Physicians and Surgeons of Pakistan, JPMA: Journal of Pakistan Medical Association, ICMJE: International Committee of Medical Journal Editors

Medical students were nominated by the college administration based on their roll numbers. Participants in the faculty workshop were registered by the coordinator at that institute, who gave an open call for participation. We did not exclude anyone who was interested in participating in these workshops. The participants were encouraged to ask questions and clarify any ambiguities during the sessions as well as at the end of each presentation. Two hands-on exercises on authorship and unethical practices were conducted, and the answers were discussed and debated in group discussion.

At the start of the workshops, participants were briefed about the ATPQ and the need to objectively measure the attitudes towards plagiarism and the impact of the workshops. The questionnaire had three parts. The first part was informed consent, which assured the participants of anonymity and explained the rationale of the study. The second part was demographic data, including information if the respondent had attended a similar, focused workshop on plagiarism or had witnessed any unethical practice. The third part was the ATPQ. The ATPQ consisted of 22 questions with three options (agree, neutral, disagree). The participants had 10 minutes to complete the questionnaire, which was then collected by one of the authors. After the pre-workshop ATPQ assessment, the three-hour workshop was conducted. To evaluate the improvement in attitudes towards plagiarism, respondents were tested again using the same ATPQ after the workshop.

Data were entered in IBM SPSS Statistics for Windows, Version 20.0 (IBM Corp., Armonk, NY, US). Frequencies and descriptive statistics were run for demographics and respondents’ characteristics. An independent-samples t-test was run to analyze differences in mean pre-workshop ATPQ scores across dichotomous (Yes/No) respondent characteristics.

To evaluate the improvement in attitudes towards plagiarism, a t-test for dependent samples was run to analyze differences in mean scores on ATPQ prior to and after the workshop was delivered. Prior to running this test, Kolmogorov-Smirnov and Shapiro-Wilk tests of normality were run to assess the assumption of normality for differences in ATPQ scores prior to and after the workshop.
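As a minimal sketch of the dependent-samples t-test the authors describe, the calculation can be reproduced by hand with the standard library. The scores below are invented illustrations, not the study's data, and the analysis here was actually run in SPSS:

```python
import math
import statistics

# Hypothetical ATPQ totals for ten participants (possible range: 22-66).
pre  = [52, 48, 55, 50, 47, 53, 49, 51, 46, 54]   # before the workshop
post = [44, 42, 47, 45, 41, 44, 43, 45, 40, 46]   # after the workshop

# A paired t-test operates on the per-participant differences.
diffs = [a - b for a, b in zip(pre, post)]
n = len(diffs)
mean_d = statistics.mean(diffs)          # mean drop in score (improvement)
sd_d = statistics.stdev(diffs)           # sample SD of the differences
t = mean_d / (sd_d / math.sqrt(n))       # t statistic with n - 1 df

print(f"mean difference = {mean_d:.2f} (SD {sd_d:.2f}), t = {t:.2f}, df = {n - 1}")
```

On the real data the paper reports a mean difference of 7.18 (SD 6.2) with t = 10.32 across 80 participants; the normality of the differences would be checked first, as the authors did with the Kolmogorov-Smirnov and Shapiro-Wilk tests.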

There were 38 male and 42 female participants with a mean age of 26.28 (±6.7) years. The majority, 59 (73.75%), were final-year medical students; 21 (26.25%) were faculty members. One-third (33.8%) of the respondents had never attended workshops, seminars, or lectures related to ethics in medical research and writing before this workshop, and only 32.5% had published manuscripts in peer-reviewed medical journals. Most of the respondents (47/80) did not have a research supervisor or mentor. More than half (55%) indicated they had witnessed unethical practice or scientific misconduct among their colleagues and seniors (Table 2).

ATPQ: attitude towards plagiarism questionnaire

A t-test for paired samples was run to analyze the difference in mean ATPQ scores before and after the workshop. The respondents reported a significant improvement in attitudes toward plagiarism after the delivery of the workshop (mean difference = 7.18 (6.2), t = 10.32, P < .001). In addition, respondents who had a mentor/supervisor and had previous experience in medical writing had a significantly less positive attitude toward plagiarism. Those who had attended workshops and seminars related to medical writing before the present workshop did not show any significant difference in mean pre-workshop ATPQ scores compared with their counterparts.

This study demonstrates that the majority of students and faculty members in Pakistan do not receive formal training in research and scientific writing misconduct, including plagiarism. This is consistent with a recent report that showed a lack of knowledge of scientific misconduct among medical students from public and private medical colleges in Karachi [ 11 ]. In addition, participants in these workshops generally lacked the skills and expertise to detect and avoid scientific misconduct or plagiarism. Many participants (55%) had witnessed unethical practices related to research and writing at their workplace. We observed that such focused workshops can enhance the understanding of students and faculty members of plagiarism and other forms of unethical practices in research and writing.

One of the major reasons for not recognizing plagiarism in Pakistan is probably related to basic education before medical college. In the traditional educational system of Pakistan, the verbatim reproduction of content from books is considered normal practice and no referencing is required [ 12 - 13 ]. Most students in medical schools and at the post-graduate level follow the same practice until they are corrected by their teachers or are caught out when submitting their assignments or research manuscripts. In 2014, Ghias et al. reported that a formal ethics curriculum in public medical schools is lacking [ 14 ]. Another important finding was that students did not refrain from engaging in scientific misconduct even when they were able to identify the academic misconduct.

Our study revealed a lack of awareness about plagiarism, with significant improvements in knowledge following a focused workshop on the topic. This is similar to the findings of Vuckovic et al. in Serbia, who demonstrated that even a short course in science ethics can have a great impact on attendees and enhance their knowledge of the responsible conduct of research. Such interventions can also change behaviors regarding the reluctance to react publicly and punish wrongdoers [ 15 ]. Kirsch et al. conducted a series of plagiarism awareness workshops at the University of South Carolina, USA, and concluded that a structured system of workshops about scientific misconduct should be arranged [ 16 ]. The same researchers also suggested that librarians can play a role in conducting online workshops for faculty and students at distant campuses. In Pakistani medical schools, well-equipped medical libraries are rare, and the majority of librarians are not trained to provide guidance to students and faculty regarding scientific and medical writing misconduct [ 17 - 18 ].

Another important finding in our study was the lack of knowledge of authorship criteria. None of the participants in these or previous workshops were aware of, or clear about, the globally accepted authorship criteria of the International Committee of Medical Journal Editors (ICMJE) [ 19 ]. This is a major issue in Pakistan, where undergraduate medical students and postgraduate residents typically work under a senior faculty member. It is not uncommon for the faculty member or head of the department to demand first authorship without any significant contribution. We dedicated one section of the workshop to this topic and conducted one exercise to reinforce the concept. Knowledge of authorship criteria can help authors understand their rights, how contributions determine the order of authorship of a manuscript, and how to avoid and contest unethical encroachments [ 20 ].

Based on the authors' personal experiences and on feedback and discussion with the workshop participants, certain unethical issues were highlighted during these workshops. Most of them have not been formally documented or reported in Pakistan. It is a common misconception in Pakistan that it is ethically correct to split a single dissertation or thesis into multiple studies in order to increase the number of publications. We noticed that the participants were not aware of, or clear about, “salami” publications and how data slicing constitutes an ethical concern [ 21 ]. We explained this concept clearly, elaborated it with examples, and discussed the drawbacks of creating multiple manuscripts from a single research project. We have also noticed that the major issue in Pakistan is a lack of training. This was demonstrated in a previous survey in three medical institutes in Karachi, Pakistan, which found that the major cause of plagiarism was a lack of training in research methodology and referencing techniques rather than malicious intent in most cases [ 10 ].

We recommend the following measures to combat the rising menace of scientific misconduct and plagiarism in Pakistan and other countries:

a) All stakeholders of under- and postgraduate medical education in Pakistan, including the PMDC, the College of Physicians and Surgeons of Pakistan (CPSP), and the Higher Education Commission (HEC), along with the Ministry of Health and Education, should devise national guidelines regarding scientific misconduct and plagiarism. These should be implemented uniformly across the country.

b) A formal training program and a national curriculum on scientific misconduct and research ethics for undergraduate and postgraduate studies must be developed. This should be a combination of lectures, seminars, and training workshops.

c) There is a need to scrutinize and train supervisors, as they are directly responsible for the training of future generations of researchers, trainers, and teachers. Many supervisors in Pakistan do not provide formal guidance to their students in research and writing and only sign the first page of a thesis without even reading the whole text.

d) There should be zero tolerance towards all forms of scientific misconduct and plagiarism with a penalty imposed on the offenders even if they are senior faculty members [ 22 ].

e) The use of plagiarism-detecting software must be encouraged in all teaching institutions [ 10 ], as it is an effective tool to detect plagiarism [ 23 ].

f) Researchers must plan everything, including the writing of the manuscript, in advance to avoid a last-minute rush to meet tight deadlines. A lack of planning leads to last-minute panic, which can result in adopting unethical shortcuts.

g) Concerns have been raised about the institutional review boards/ethics review committee in Pakistan as being "rubber-stamping committees" [ 24 ]. A majority of the individuals working in these committees do not have the required training or cannot spare adequate time for the job. There is an urgent need to train individuals and to strengthen the institutional review boards/ethics review committee in every medical institute of the country.

h) There is a need to address the current culture of “publish or perish” in Pakistan, which has led to a rat race of publishing low-quality manuscripts just for the sake of promotions. The focus must change from quantity to quality [ 25 ].

i) A central registry of researchers and authors should be created similar to the AuthorAID mentor program [ 26 ]. This will allow potential supervisors to offer their services and make it possible for young researchers to identify a suitable mentor for their research journey.

j) Whistle-blowing is now considered an ethical activity in the developed world, as it has the potential to identify wrongdoings in the healthcare sector [ 27 - 28 ]. It is time that Pakistani academia also adopts this global norm and starts recognizing the value of whistle-blowers, as they can help highlight unethical research and scientific misconduct, which otherwise goes unnoticed. They should be provided legal cover and guarded against exploitation when they expose a wrongdoing.

Limitations

This study has some limitations that warrant mention. The sample size was limited to 80 participants, which is very small considering that there are currently more than 140 medical and dental colleges in Pakistan with thousands of students and faculty members. For a majority of the students, this was the first workshop on scientific misconduct, and some of them complained of information overload, as many concepts were new to them and difficult to absorb in a single sitting. We did not document the long-term effects of this training on the research output of the participants, or whether it actually produced a sustained change in attitudes towards plagiarism.

Conclusions

The attitude towards plagiarism and the knowledge of scientific misconduct were poor among our workshop participants. This is likely due to low awareness and the absence of scientific misconduct training in the curriculum at both the under- and postgraduate levels. Having a research mentor and prior experience in medical writing were associated with a better awareness of plagiarism. Awareness of plagiarism and scientific misconduct can be significantly increased with focused workshops on plagiarism.

Acknowledgments

We are thankful to Colleen O'Connell, MD FRCPC, Asst. Prof. Dalhousie University Faculty of Medicine, for her review of the manuscript for language, grammar, and syntax. We are also thankful to Dr. Ahmed Waqas, CMH Lahore Medical College, for help with the statistical analysis.

The ATPQ has a three-point Likert scale response pattern: agree (coded as 3), neutral (coded as 2), and disagree (coded as 1). The total score is the sum of all 22 items. There is no negative scoring. Increasing scores reveal a higher tendency toward plagiarism.
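The scoring rule above can be made concrete with a toy scorer. The coding comes from the text; the responses are hypothetical, and the actual 22 questionnaire items are not reproduced here:

```python
# Coding from the text: agree = 3, neutral = 2, disagree = 1.
CODES = {"agree": 3, "neutral": 2, "disagree": 1}

def atpq_total(responses):
    """Sum the 22 coded items; a higher total indicates a stronger tendency toward plagiarism."""
    if len(responses) != 22:
        raise ValueError("the ATPQ has exactly 22 items")
    return sum(CODES[r] for r in responses)

# Totals range from 22 (disagree on every item) to 66 (agree on every item).
print(atpq_total(["neutral"] * 22))  # a uniformly neutral respondent scores 44
```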

The content published in Cureus is the result of clinical experience and/or research by independent individuals or organizations. Cureus is not responsible for the scientific accuracy or reliability of data or conclusions published herein. All content published within Cureus is intended only for educational, research and reference purposes. Additionally, articles published within Cureus should not be deemed a suitable substitute for the advice of a qualified health care professional. Do not disregard or avoid professional medical advice due to content published within Cureus.

The authors have declared that no competing interests exist.

Human Ethics

Consent was obtained from all participants in this study. The CMH Lahore Medical College and Institute of Dentistry IRB issued approval NA.

Animal Ethics

Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue.

Superconductivity physicist 'engaged in research misconduct,' UR says

University of Rochester physicist Ranga Dias, who made international headlines in the scientific community several years ago with purported breakthroughs related to room-temperature superconductivity, "engaged in research misconduct," according to an internal university review.

It is only the latest blow for Dias. Major research journals have retracted his publications, concluding he misrepresented evidence and kept his research partners in the dark about key findings.

In an email, Dias said the problem lay with "an individual's opinion and apparent errors in understanding the data." He accused UR of predetermining the investigation.

"It is disheartening to see that my students have become victims of a premeditated and predetermined outcome by the University of Rochester," he wrote. "And these outcomes are all too often dictated by the Board of Trustees these days, a practice that is affecting students and faculty alike."

Earlier internal UR reviews cleared Dias of wrongdoing. The university did not release the full findings of those investigations or this most recent one but instead issued a summary statement.

"The University’s investigation by external experts identified data reliability concerns in those papers that confirm the appropriateness of those retractions," the statement reads in part, referring to retractions by the journals Nature, Physical Review Letters, and Chemical Communications. "The committee concluded, in accordance with University policy and federal regulations, that Dias engaged in research misconduct."

University of Rochester physicist Ranga Dias' research

Dias' research had to do with superconductivity. Very high pressure and extremely cold temperatures can distort atoms' electron orbits, allowing for electricity to be conducted without energy loss. That would open the door for major advances in computing speed and many other fields, but the need for very low temperatures — about minus 200 Fahrenheit — limits its real world application.

In a pair of papers published in Nature starting in 2020, Dias claimed to have achieved superconductivity without extreme high pressure or cold temperatures. But other physicists soon noticed suspicious patterns in the data, leading to further peer review and eventually retractions.

His graduate students at UR, too, raised concerns about the research process. They said he didn't share some key details with them and didn't allow them time to review the papers before submitting them to journals.

For its part, UR is taking several steps as a result of the controversy, spokeswoman Sara Miller said. The university will:

  • Create a full-time research integrity officer position "to offer more frequent and focused training, and enhance our communications and responses to concerns from the University community."
  • Review and update its research misconduct policy.
  • "Increase and improve" communication to students about reporting research misconduct.
  • "Clarify our expectations of research mentors to address concerns raised by this case."

— Justin Murphy is a veteran reporter at the Democrat and Chronicle and author of "Your Children Are Very Greatly in Danger: School Segregation in Rochester, New York." Follow him on Twitter at twitter.com/CitizenMurphy or contact him at [email protected].
