For this reason, the existence of fraud within the scientific craft has been largely ignored, partly owing to its assumed non-existence, but also because it was traditionally maintained that the normative structure of science possessed inherent mechanisms to prevent deviant acts in scientific inquiry. The present essay calls this assumption into question and investigates the extent to which the structure of science averts scientific misconduct.
To achieve this goal, the study starts by defining scientific fraud and then scrutinizes the idealistic conceptualizations of the normative structure of science to determine whether they still apply. Finally, the paper addresses several potential motivational factors leading scientists to commit fraud and demonstrates that certain aspects of the scientific structure, rather than the individual, make such acts possible or even likely.
Although a precise definition is lacking, by scientific fraud we understand an act of deception whereby one’s work or the work of others is consciously and intentionally misrepresented. It belongs to the wider category of scientific misconduct, defined as deviation from accepted ethical practices for proposing, conducting, and reporting research.1 Scientific fraud may take numerous forms, the most common being falsification of data: outright fabrication, deceptive selection and reporting of findings, and omission of conflicting information.
Moreover, scientific fraud is a label for improprieties of authorship, which include plagiarism and other improper assignments of credit, such as excluding contributors or claiming the work of someone else as one’s own. Additionally, under the term scientific fraud are classified acts of misappropriation of others’ ideas, for instance through improper use of information or influence gained by privileged access, such as service on peer review panels, editorial boards, and policy boards of research funding organizations.
Finally, it is necessary to distinguish fraud from honest error and from ambiguities of interpretation that are considered inevitable in the scientific process.3 For many years, scientific fraud as defined above was not perceived as an issue of concern, given that the normative structure of science would make such acts unlikely. This view was most clearly articulated by Robert K. Merton4, who understood the institutional goal of science as being the “extension of certified knowledge” and outlined four norms that he saw as central to this pursuit.
Universalism, as he maintained, implies that the validity and truth of scientific statements be entirely separated from the personal characteristics of the one who initiates them. Communality entails that scientific findings should be freely shared with others, whilst secret or classified research is antithetical to the spirit of science. Disinterestedness requires that the scientist’s research be guided not by personal motives (e.g. profit) but by the wish to extend scientific knowledge. Finally, organized skepticism means that scientists should be encouraged to examine openly, honestly, and publicly each other’s work and provide constructive criticism. Merton believed that conforming to these norms generates two types of control mechanisms that discourage fraud in science.5 According to him, the first is an “inner mechanism” secured by the scientist through the internalization of norms and other processes of socialization.
In the long process of training, the scientist assimilates the guidelines and methods of scientific inquiry and learns that fraud represents the most serious crime in the search for scientific certainties. The second form of control in science is an “external mechanism”, which follows from the norms of communality and organized skepticism and implies that if a scientific result were important enough, other scientists would try to repeat it.
Replication of the experiments would facilitate exposure of cheating and encourage honesty. For this reason, science has commonly been perceived as self-policing and self-correcting, and fraud as extremely rare if not completely absent.6 Therefore, “in the past scientists widely believed that science possessed sufficient internal checks to effectively deter fraud, to discover dishonesty so quickly and efficiently that the resulting damage to a scientist’s professional career would be too great to risk.”7 Nevertheless, an overview of the present scientific reality should not only confirm that social control mechanisms in science are much weaker than usually accepted, but also that there are structural as well as personal incentives conducive to scientific fraud, which cast doubt on the traditional assertions. To begin with, the four aforementioned Mertonian principles can be criticized for failing to accurately reflect the present state of affairs in the field of science.
For instance, the norm of universalism hardly appears to be satisfied nowadays, given that the truth of a scientific statement is not always separated from the personal characteristics of the individual scientist. Instead, “truth” in science is often negotiated and clearly depends on the prestige of the scientist who produces the statement. The acceptance or rejection of claims in science is often determined by their source and their fit with prevailing beliefs and knowledge.
It is usually unlikely that the scientific community will pay as much attention to a young, inexperienced scholar as to an established scientific VIP, especially if the claim in question contradicts or rejects generally accepted scientific principles. In short, reputation and the appeal of a theory often confer “immunity from scrutiny”.8 Belief in the accepted scientific paradigms on which many spend their entire careers working can be blamed for inducing a natural propensity to support new findings and theories that agree with established scientific world-views, even if they are deceptive.
Science is not free from internal9 or external (cultural and political)10 influences, and ample examples exist of cases where deviance in science can be traced to beliefs that have “blinded” those responsible for upholding the truth. The case of Trofim D. Lysenko, a former peasant and plant-breeder, illustrates how “science” can be absorbed into a political-cultural matrix eager to believe it valid and true: Lysenko publicly endorsed the debunked Lamarckian theory, i.e. that the mechanism by which evolution works is the inheritance of traits acquired in response to one’s environment.
Lysenko’s view accorded nicely with the “social engineering” ambitions of Stalin’s communist regime and was proclaimed a dogma by the Kremlin, ensuring Lysenko’s decisive victory over his opponents, who were systematically removed from their posts.12 Also highly relevant in this context is the case of Cyril Burt. This well-respected scientist died (in 1971) before his studies – claiming a clear correlation between the occupational status of parents and the I.Q. of their children13 – were investigated and shown beyond any reasonable doubt (in 1978) to be fraudulent.
Asimov14 notes that Burt had probably fabricated data so that they would fit his theories, and Wade15 adds that “Burt’s data remained unchallenged because they confirmed what everyone wanted to believe”. Moreover, contrary to Merton’s second norm, scientists fail almost daily to share their findings with each other for a variety of reasons, such as secret or classified research, fierce competition, and the fear of having one’s ideas stolen. This explains why, despite the norm of communality, access to other scientists’ original data is extremely problematic.
In these circumstances, replication is not always possible, since in many cases not enough information is available about the original work, making detection of fraud difficult. Furthermore, it appears that disinterestedness in science can hardly be maintained. After all, a scientist derives two sometimes-related types of personal rewards from his work, a psychological and an economic one.16 The former refers to such rewards as prestige, fame, recognition, self-esteem, international reputation, respect, and the bestowal of various honors.
Especially since the latter half of the twentieth century, scientists have often been consulted by high government officials and have participated in top-level policy discussions on national, international, social, medical, and political issues. Additionally, the highest award any scientist can attain, the Nobel Prize, is highly personalized, since it is conferred on individually chosen scientists. For instance, in 1923 the Nobel Prize in Physiology or Medicine was awarded to Macleod and Banting for discovering how to isolate insulin, and according to Bliss17, it appears that Macleod (and others) twisted the truth about who should have gained the honor.
Certainly, Macleod did not show much disinterest in this case. As far as the personal-economic compensation goes, Ben-Yehuda argues that “recognized, well-established scientists usually enjoy economic security and a moderate to luxurious life-style. Many travel throughout the world very comfortably and have access to research grants, which might further bolster their already comfortable salaries”.18 Consequently, scientists in fact have profound interests in being successful, and disinterestedness does not appear to be a realistic norm.
Finally, and arguably most important, the norm of organized skepticism appears to be the most troublesome of Merton’s principles and suggests that the external control mechanism may be far weaker than conventionally assumed. Several important problems exist with the replication argument. First, replication is in general not considered very prestigious or interesting work by scientists who have been trained to innovate.19 In addition, granting institutes and scientists are reluctant to provide funds, time, and effort for replication alone, unless the replication itself is significant in some important way.
Moreover, the idea of replication is most applicable to experimental disciplines, which lend themselves to empirical experiments and reproductions. Replications are rather unlikely for the ideas and theories that dominate the humanities and social sciences. In addition, the use of methodologies and techniques such as surveys, participant observation, and ethnography virtually guarantees that results cannot be replicated. Nor can replication ever really prove that an experiment is fraudulent, but only that its results are wrong; to conclude that fraud is involved, one needs solid supporting grounds.
Finally, replication may sometimes be obstructed by financial, structural, or technological limitations. For example, only the advent of X-rays and other sophisticated technical equipment made it possible to declare the Piltdown Man a hoax, forty years after its discovery.22 Consequently, the fact that replication is not a popular or straightforward endeavor means that the probability of fraud detection is not particularly high. The complicated and elaborate division of labor in science, leading to increased specialization, makes the detection of fraud even more remote.
Additionally, the low probability of exposing fraud is coupled with generally mild sanctions once a fraudulent scientist is indeed discovered. Ben-Yehuda23 argues that “fraudulent scientists are not submitted to anything like the criminal justice system, and a formal trial does not follow investigations.” In many cases, the deviant act of the scientist is not even made public. The superiors of deviant scientists are rarely eager to disclose such cases, for doing so might damage their own, and the institute’s, prestige and credibility.
Thus, unless the case involves blatant fraud and fabrication on a mass scale, the most the deviant scientist is likely to incur is loss of employment. As a result, it appears that the combination of low probability of detection and non-harsh punishments provides a fertile context for misconduct. One can thus observe that several elements and forces inherent in the normative structure of science are very much conducive to scientists committing fraudulent acts. Nevertheless, this is certainly not sufficient to explain the motivational element for engaging in scientific misconduct.
This topic has been rather controversial in the past, with “current popular efforts tending to be either individualistic ‘bad apple’ explanations, or indictments of the pressures to produce inherent in the structure of modern science”.24 The preference for individualistic interpretations is reflected, for instance, in the statement of Philip Handler25, former president of the National Academy of Science, who argued that the issue of scientific fraud “need not be a matter of general scientific concern”, as it was primarily the result of individuals who were “temporarily deranged”.
Such attempts to explain scientific deviance by attributing it to character or personality flaws of the perpetrators are excellent examples of the “bad person” approach to explaining deviant behavior. “Being the least threatening to the status quo, this approach has the added advantage of deflecting attention and potential criticism away from the institutional arrangements in which the deviance occurs – in this case the structure of modern scientific research.”26 Nevertheless, criticism has been forthcoming, much of it from within the scientific community.
At least since the second half of the 1970s, the notion crystallized that mobility in most Western scientific establishments depended primarily on published works. This put heavy pressure on young scientists, as the slogan “publish or perish” became very meaningful. Those who could write fast and had good contacts with editors and publishers moved forward; the others agonized. “When the idea of producing much and quickly is being impressed upon a young scientist who very badly wants an academic career, a temptation to deviate is likely to be created.”27 Ebert28, former Dean of the Harvard Medical School, notes that science has “inadvertently fostered a spirit of intense, often fierce competition” and that the pressure to produce and publish is probably the single most important reason for fraud in science. Promotion in the most prestigious medical schools is based almost entirely on the evaluation of published research. Moreover, power and prestige now depend on the ability to generate external funding, with younger researchers pressured to turn out papers reporting positive results as fast as possible.
In many cases, getting grant money makes the difference between doing research and being out of a job. Consequently, what had been a purely intellectual competition in the past has now become an intense struggle for scarce resources based on published work.29 What this entails is that fraudulent scientists are certainly not born as such; rather, they become fraudulent. “A lonely scholar, hungry for publications and recognition, in stiff competition and in a desperate subjective need for tenure, may easily become cynical of science.”30