Quite a few years ago I did a doctoral degree in molecular biology in a laboratory that was focused on embryonic development. It’s one of the intriguing riddles of life: how, from a single cell – the fertilized egg – a whole being is created, a human being, for example, with two hands, two eyes, a nose in the right place, a bladder. How does it come about that the egg divides and that each cell knows what it’s supposed to be?
In doing research on a subject like this, a system model is used – ours was a chicken. Yes, a chicken, because in contrast to mice and humans, the embryo of a chicken develops outside the mother’s body, in an egg that can be incubated and monitored. Because the key players in the process are the genes, the dream of researchers in this field is to create transgenic chickens – each of whose cells contains a gene that is introduced into them artificially from outside and can be tracked during embryonic development. Transgenic mice, fish and flies are already routine in scientific research, but no one had succeeded in carrying out the process with chickens. Until someone did.
One day we learned that a doctoral student from the Hebrew University of Jerusalem had found the holy grail. The idea was to “hitch a ride” on the spermatozoon (sperm cell) of the chicken, which is, if you think about it, a natural carrier of genes. If a foreign gene were introduced into it, the spermatozoon would carry it to the egg, too, resulting in an embryo containing the new gene. The secret here lay in a chemical solution the student developed, which could perforate the spermatozoa delicately so they would be able to absorb the foreign gene. To prove that the system worked, the student used a “model gene,” whose product is a protein that renders a certain substance blue. The fact that embryonic cells turned blue proved that they had taken up the foreign gene.
I was occupied with my research, and it was an undergraduate student, Gil Levkowitz, who undertook to learn the method and make it available in our laboratory. Levkowitz had arrived within the framework of a summer project in which students acquired experience working in a lab before proceeding to a master’s degree. Levkowitz, who was studying at Tel Aviv University, was already then an ambitious young man – today he’s a professor in the department of molecular cell biology at Rehovot’s Weizmann Institute of Science – so he was delighted to be offered the project.
“It was supposed to be simple,” he recalls today. “Like cooking according to a recipe, with ingredients prepared in advance. Take the solution, the chicken’s spermatozoa and the gene you want to introduce, mix them in a test tube and put it on ice for a few minutes. Not rocket science. Afterward a poultry worker would inseminate the chickens, they would lay eggs and I would collect them.” And inside the eggs – shh, don’t wake it up! – sleeps a tiny embryo, each of whose cells contains the new, “blue” gene.
“I went to Jerusalem and worked with the doctoral student,” Levkowitz continues. “Everything went well, but when I got back to [my lab in] Tel Aviv it didn’t work [no cells turned blue]. I tried again, and again it didn’t work. I was a novice, so it was clear that I was doing something wrong. I went back to Jerusalem to improve my method, I sat with the doctoral student and together we went over all the stages. It worked. I got back to Tel Aviv and it stopped working. But sometimes it did actually work. We checked everything – for example whether there was a difference in the acidity of the water we’d used in Jerusalem as compared with the water in Tel Aviv. Biological experiments can succeed and fail over very minor issues. But no matter what we did, sometimes it worked and sometimes it didn’t. It was so mystifying!”
In the meantime, the summer ended and Levkowitz left the lab. One day the phone rang. It was the head of the lab in Jerusalem, who told us that the whole thing had been a fake. There was never any method to introduce genes, the doctoral student had deceived everyone. He would come to the lab at night – the blue color was supposed to develop in the test tube overnight – and dye the experimental cells. He simply dripped the blue material on them, so that in the morning, when the results were examined, the cells in the experimental test tube were found to be blue, as though they contained the foreign gene, whereas the control cells remained transparent.
We were flabbergasted. Suddenly everything made sense. The explanation for the inconsistent results became clear: When the doctoral student was around, the experiment worked; when he wasn’t present, it didn’t. That’s why it always worked in Jerusalem but in Tel Aviv it usually didn’t – except when the student came to Tel Aviv to help out with problems that arose.
The world of science, which we had thought was lofty and pure and safe from treachery, had been shattered. The process of sobering up was as frightening as it was brutal for us. How naive we had been – and maybe still are? The cheater had been caught, but perhaps – at least that’s how it seemed at that moment – he might just as easily not have been detected. Everything suddenly looked so fragile. What did we really know?
That episode recently came to mind, possibly because of the fragility and uncertainty characterizing the period of the coronavirus that we’re living through. Perhaps because of the growing skepticism about the impeccability and validity of the world of science in the context of the pandemic and about the vaccines being developed to fight it. Science is one of the few realms whose sole purpose and justification for existence is the search for truth. Yes, truth: the laws and rules that underlie nature – taking into account, of course, the limitations of our human ability to apprehend them.
But it turns out that, as in every realm of human endeavor, in science too there are people who falsify, manipulate and deceive. Maybe not as many as in other spheres, but they exist, and if it’s truth we’re after, then the time has come to clarify the situation. After all, this is the essence of science: to study the world and expose it as it is, whether we like what we find or we don’t.
The question of how widespread the phenomenon of fraud in science is depends first of all on the definition of “fraud.” The conventional definition refers to an act that is deliberately performed with the aim of deceiving someone else. Conscious intention is what differentiates the fraudulent act from error or carelessness, which occur unintentionally. The U.S. National Academy of Sciences cites three types of academic fraud: plagiarism (copying an idea, data or entire article without crediting the source); falsification (manipulating data with the aim of achieving the desired result or some sort of interesting outcome); and fabrication. The last category, the subject of this article, involves inventing data, faking results – producing genuine fiction.
It’s hard to estimate the frequency of this phenomenon, not least because it’s not always possible to prove with certainty the perpetration of fraud, as opposed to plain sloppiness or chance error, but mainly because such an assessment relies on the number of reports of fraud, and reports are the coins that are found under the proverbial streetlight. Where there are no streetlights, nothing is found. So, when it turns out that the largest number of cases of fraud are found in the United States – and at prestigious universities, to boot – the question arises of whether that means the largest number of frauds occur there, or whether it’s precisely in those places where there is a greater commitment to truth and integrity, hence a greater effort is made to expose misconduct.
On the face of it, fraud (as well as innocent mistakes and carelessness) in the realm of science should be discovered, even before publication, by means of the well-established process known as “peer review”: Ahead of publication, every scholarly article is sent to scientists who were not involved in the study in question, who subject it to a trenchant, critical review. That this is not a foolproof system is apparent from the fact that falsified articles nevertheless get published. Accordingly, in recent years a website, pubpeer.com, was created to enable continued peer review of articles that have already been published. The site’s initial purpose was to serve as a platform for discussion, but it has become a place for reporting anomalies in research results, critiquing data analysis and raising suspicions of fraud.
However, most scientific fraud is discovered by other means. In many cases, the whistleblower works in the lab in which the study was conducted. An example is the case of evolutionary biologist Marc Hauser of Harvard University, a noted expert in the field of cognition. Hauser was engaged in fascinating research, examining to what extent cognitive abilities that are considered human – such as language acquisition – are actually unique to humanity, or whether they are found in other species as well. One indicator that attests to the potential for language acquisition is the ability to distinguish between sound patterns (such as between “da-da” and “da-di”), an ability that exists in apes and in humans from infancy. Hauser was determined to prove that it also exists in less well-developed primates. To that end, he used an accepted experimental method by which subjects listen to a repeated pattern of sounds, which is suddenly changed. Infants and apes typically stop and look in surprise at the loudspeaker from which the sounds are emanating – an indication that they have recognized the change.
Hauser was the only scientist in the world who worked with tamarin monkeys. To determine whether they also behaved in the same way, he played them sounds and recorded their reactions on video. Hauser and another experimenter watched the footage and coded the monkeys’ behavior. A research assistant in the lab who was supposed to analyze the results was surprised to discover an immense disparity between the two sets of coding. The report of the other experimenter showed disappointing results: The monkeys didn’t tend to look in the direction of the loudspeaker at all when the sounds changed. Hauser’s report, by contrast, presented a dizzying success: The monkeys responded at exactly the appropriate times – i.e., they identified the changes in sound patterns.
The research assistant was uneasy. How could two people who viewed the same video come up with opposite results? He suggested that a third person should watch the video and code the monkeys’ behavior, but Hauser refused, offering a vague explanation and also expressing anger. The assistant decided to examine the matter for himself. He recruited a doctoral student from the lab, and without informing Hauser, the two watched the footage, separately, and coded the behavior they observed. Their results were identical: The tamarins did not react to the changes in the sounds.
When the researcher and doctoral student shared their findings with other lab colleagues, it turned out that the latter also had misgivings about Hauser’s work. The two then went to the university authorities. While Hauser was on a visit abroad, officials arrived in his lab and confiscated computers, video footage, drafts of articles and notes. In 2010, following a three-year investigation, Hauser was found guilty of eight instances of scientific misconduct, involving fabricating and falsifying data. (Ironically, one of his articles, which I came across while researching this piece for Haaretz, dealt with the question of whether primates are capable of perpetrating deliberate deception.) The scientific world was utterly shocked: Until then Hauser had been a superstar.
One of the most pernicious deceptions on record in the world of science was uncovered by a journalist. In 1998, Andrew Wakefield, a British physician and researcher, published an article in The Lancet, a prestigious medical journal, claiming that children who had received the MMR triple vaccine – against measles, mumps and rubella – were at risk of developing autism. A tremendous furor erupted over his findings, and it wasn’t until years later that the fraud was discovered. In 2004, Brian Deer, a reporter with The Sunday Times, revealed that two years before publishing his article, Wakefield had received money from a lawyer who was about to sue the manufacturers of the vaccines, and had also registered a patent for an alternative vaccine.
Deer went on to compare the data of the autistic children published in The Lancet with accounts from their parents, and discovered that Wakefield had played around with the data. He altered descriptions of symptoms that were documented in the children’s medical files so that they appeared to be symptoms characteristic of autism, and he also shifted dates. Thus, in certain cases, symptoms that had been observed before the MMR vaccine was administered were delayed – i.e., said to have manifested afterward. In other instances, symptoms that appeared long after the vaccination were “moved up” to make it appear as though they were closely connected to the date of the shot and would thus seem more incriminating.
The greatest exposer of misconduct in science, however, is not necessarily a suspicious lab worker, an ambitious journalist or a determined rival. The most effective filter of science is built into its various disciplines as a basic principle: reproducibility. That is, the ability to replicate an experiment or observation repeatedly, using the same methods and obtaining the same results. (It’s important to note that such a test is not relevant in the social sciences, where it’s difficult if not impossible to achieve the same results.)
That was how the deception perpetrated by German physicist Jan Hendrik Schön was discovered. Schön caused an uproar in the world of nanotechnology when he claimed, in 2001, to have created tiny transistors from organic molecules. Fabrication and manipulation of data were also what undid the study published by Japanese stem-cell biologist Haruko Obokata in 2014. She presented a tempting way to “reverse the arrow of time” for mature cells, restoring them to a quasi-embryonic state from which they could develop into whatever type of cell was desired.
In fact, that was also how the fraud I witnessed as a doctoral student was uncovered. Our laboratory found it difficult to replicate the method for producing transgenic chickens that had been developed by the doctoral student from Jerusalem.
“The results achieved by the student were wonderful,” says the person who served as the student’s adviser, and who wishes to remain anonymous. “He pushed me hard to publish them in a serious journal, but I insisted on additional corroboration. I sent him to the lab of a colleague in the United States who was working on similar research. At first marvelous reports arrived; everything worked wonderfully. Then there was a break of a few weeks – the student returned to Israel – and then I got an email saying that nothing had worked and there were suspicions of deceit.”
I asked the adviser how he had reacted. “I was very surprised,” he says now, “but not for a moment did I suspect anything. I thought we would do a check and thus remove the whole matter from the agenda. I was certain there was nothing to it.”
The incident seems to have affected him deeply. Despite the considerable time that has passed, his voice is still tinged with emotion and also sorrow. It emerges that he and three other scientists who were made privy to the secret conducted an examination of their own. After the doctoral student had set up the experiment and left it on the worktable, the group switched the labels of the experimental test tube and the control test tube. The blue color (attesting to the ostensible success of the method) should thus have appeared in the test tube marked as the control.
“The next morning,” the adviser relates, “when the student arrived happy and announced that the experiment had worked – namely that the experimental test tube, which was now actually the control test tube, had become blue – we understood. We did a few more similar checks, and the conclusion was unavoidable: This was fraudulent. It was awful. That moment when you understand…”
What do you do when you discover something like that?
“Many people advised me to end the whole business quietly, to kick the student out of the lab and leave it at that. I didn’t agree. My view was that the moment I sweep the matter under the rug, I become a party to the misconduct.”
He took the case to the university’s disciplinary committee. After an investigation, the student was found guilty and was expelled; a detailed account of the events was entered in his academic file.
“People implored me not to enter the information in the student’s academic file,” says the former adviser, “in order not to stigmatize him for the rest of his life. I might have considered that, had he gone for psychotherapy, as I suggested – after all, he was sick.
“What he did was pathological,” the adviser continues. “To come to all the labs at night, to make such a great effort without any logical thought. And by the way, that’s why I didn’t get angry with him at any stage, despite the damage he caused me and other researchers who were involved. He came from an outstanding family of scientists, with many prizes and achievements to their name, and he might have come under pressure to make similar striking achievements. But he didn’t go to therapy. In fact, he didn’t even admit to what he had done (except in two of the many cases), and the record remained as it was. It was very sad.”
Unfortunately, not all heads of laboratories who encounter similar cases behave as this adviser did. Students who are caught committing fraudulent acts in labs are generally expelled with no official inquiry and, concomitantly, without any additional sanctions being leveled against them. The story becomes more complicated in the case of a faculty member who comes under suspicion.
On one hand, every university has a strict and detailed charter that sets forth the appropriate and obligatory response required should suspicions be raised against lecturers or professors. On the other hand, it’s not clear what actually happens, because universities are not eager to talk about the subject publicly. I had to be very insistent to get responses of any kind from universities in Israel (apart from one, which replied immediately). The desire to avoid publicity in this regard and the fear of the attendant damage to the institution’s reputation are understandable. The obligation to address the matter openly (in principle, there’s no need to name names publicly) should be equally understandable.
A scholar who has studied this subject in depth is Amalya Oliver, of the Hebrew University’s department of sociology and anthropology. Prof. Oliver is co-author, with Nachman Ben-Yehuda (another professor from the same department), of the 2017 book “Fraud and Misconduct in Research: Detection, Investigation and Organizational Response” (University of Michigan Press).
“Generally speaking, the universities have actually woken up to the issue” of scientific fraud, Oliver says, “especially in the wake of activity by the U.S. Office of Research Integrity [in the Department of Health and Human Services], which constantly urges transparency. That office publishes information about all the cases of fraud that have been discovered, and thereby makes it legitimate for universities to admit the existence of the phenomenon. Because two institutions in the United States, the National Institutes of Health and the National Science Foundation, are responsible for most of the research grants allocated to scientists worldwide, they can allow themselves to make their budgetary allocations contingent on the adoption of an ethical code of research by the applicant university. The goal is to establish warning lights, to define boundaries and to conduct an open discussion in order to create a socialization of integrity. The emphasis is on an attempt to prevent phenomena of fraud, and far less on punishment.”
Indeed, the sanctions applied against people who have transgressed aren’t very impressive.
Adds Oliver: “Universities don’t have clear standards in terms of the appropriate sanctions for each case of wrongdoing, and the punishments are surprisingly light. We surveyed all the cases of [scientific] fraud that have been published to date in the world – about 750 – and found that in 30 percent of them, the university temporarily suspended those at fault from academic activity; in 23 percent, the offenders were barred from ever submitting a request for a research grant; 11 percent of those found blameworthy resigned; and 8 percent were fired. A university prefers to have a person resign rather than to fire him, as that obviously makes less noise. A very small number were sentenced to prison. In most cases, the people continue to work in all kinds of organizations or to deal with closely related subjects.”
‘Publish or perish’
In trying to understand what motivates scientists to engage in fraud, it seems reasonable to assume that the standard explanations for such conduct in general – in the political or economic fields, for example – will be relevant here as well. They include: a narcissistic personality (“I am so superior, regular rules don’t apply to me”); psychopathic traits (disregard for the damage inflicted on others); hopes for economic benefit; and a blind desire for honor and recognition. At the same time, consideration should also be given to non-personal reasons, which are related to the greater research system – to the sociological-organizational structure of science.
One factor that comes up regularly in any survey of scientific fraud is the pressure exerted on scientists to publish articles. “Publish or perish,” goes the saying, referring to the relentless pressure to publish research in order to get tenure, garner academic promotions and obtain research grants. The expression dates from a 1928 scholarly article, and since then the competition has only intensified, as demands grow and resources dwindle.
“The more intense the pressure of the milieu, the more acts of fraud there will be,” a scientist who asked to remain unnamed told me. “You will find the greatest number of them at the most prestigious institutions, in the most intensely developing fields and in the most highly regarded journals.”
What’s happening at Chinese universities is an extreme case of particularly heavy pressure being exerted on scientists, pressure that corrupts their research. Almost all the country’s universities require doctoral students to publish a number of articles in select journals while still working on their degree, on pain of not receiving it. Not surprisingly, the frequency of cases of fraud and even bribery in the scientific realm is surging there.
The situation in the West is far less dire, of course, but it’s worth quoting theoretical physicist Peter Higgs, who said, with regard to the pressure to publish: “It’s difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I did in 1964.” He was speaking in 2013, the year in which he was awarded the Nobel Prize. He added, “Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.”
Not every research study gets published, of course. Editors of scientific journals tend not to publish articles about hypotheses that proved disappointing or experiments that ostensibly didn’t succeed. The result is that anyone who wants to get published has to obtain positive results, come what may. Fine results, meaning clear-cut and precise, are chosen for publication in the prestigious journals. In this context, it’s edifying to read the comments of one of the arch-fraudsters in social psychology, Diederik Stapel, from Holland, who simply fabricated research data that appeared in dozens of articles that earned high praise.
In an article published in The New York Times in 2013, for example, two years after he was exposed, Stapel recounted with surprising frankness that at the start of his career he was extremely frustrated by the chaos he saw in the data from experiments he conducted; only rarely was he able to draw clear conclusions from them. His obsession with symmetry and order, the article notes, “led him to concoct sexy results.” In Stapel’s words, “It was a quest for aesthetics, for beauty – instead of the truth” – an approach journals apparently found attractive. “They are actually telling you: ‘Leave out this stuff. Make it simpler.’”
To illustrate the reciprocal relations between the scientist and his surroundings, he pointed to his 10-year-old daughter who was sitting by the fireplace and singing a song welcoming St. Nicholas (aka Santa Claus). Children her age know that St. Nick isn’t really going to come down the chimney, Stapel told a reporter. “But they like to believe it anyway, because it assures them of presents.”
Tale of a skull
Ethics aside, fraudulent conduct in science can cause damage in the real world – to begin with, to the field and body of knowledge being dealt with. Falsified studies present erroneous data, which other researchers then draw on – mistakenly – to explain their findings and to frame their own work. The journals in which the articles were published may flag them as invalid (something that’s easier than ever in the internet age), but until that happens the articles constitute misrepresentations. Fabrications and falsifications frequently also have economic significance, because researchers invest money and time in attempting to replicate others’ experiments or to conduct follow-ups – and by the time they discover that they are essentially relying on a mirage, their resources may have dwindled. Fabricators themselves obviously also make improper use of the grants and other funding they have received; indeed, there have been cases where such researchers have been charged with embezzlement.
Moreover, fake studies from which concrete conclusions have been drawn may cause clinical damage, if physicians adopt the types of treatments recommended by them. There is no need to elaborate on the massive damage caused by the study that faked a connection between the MMR vaccine and autism. A thousand articles of clarification will not succeed in plucking out the stone that was thrown in the pond and sparked those endless ripples. The movement of opposition to vaccination that surged worldwide in the wake of the publication of Wakefield’s article did not relent even when the fraud was exposed: The refusal of increasing numbers of parents to vaccinate their children led to a rise in the number of dangerous outbreaks of measles, rubella, mumps and whooping cough. The current resistance to the coronavirus vaccines echoes that possibly lethal fraud.
False results can also have a ruinous effect on public policy regarding a certain research subject, as occurred in the wake of the research on twins by the British psychologist Sir Cyril Burt. In 1966 Burt addressed the age-old question of heredity vs. environment – which influences human intelligence more, nature or nurture? – and presented a study of 53 sets of identical twins who were separated at birth and grew up in surroundings of different social levels. The results showed clearly that the intelligence of each set of twins was very similar, regardless of the environment in which they were raised. Conclusion: The decisive element in the determination of intelligence is heredity, not one’s environment.
That was a blow to the scientific approach that underlines the importance of education in determining a child’s intelligence. For a long period, until it was discovered that Burt’s study was fraudulent, it took the wind out of the sails of educators and led to cuts in budgets for disadvantaged neighborhoods, based on the premise that there was no point in investing in schooling.
Sometimes, fraud gives a bad name to an entire scientific realm, as in the notorious case of the “Piltdown Man” – a human skull bearing the jaw of an orangutan, which was supposedly found in a gravel pit in the south of England. The skull was given to scientists in 1912 by Charles Dawson, a solicitor and amateur archaeologist. As this was long before the existence of sophisticated scanning technology or DNA analysis, Dawson was able to fool the scholars into thinking that he had found the “missing link” in human evolution: a creature possessing humanoid and apelike features alike.
It wasn’t until the 1940s that chemical methods were developed that revealed that the human skull was actually 500 years old and the jaw no more than a few decades old. The revelation of this magnificent fabrication was fodder for the allegations of the deniers of evolution, supplying them with supposed proof of the falsity of the entire theory.
From there it is a short step to using fraudulent information to undermine science as such – to spreading assertions that “everyone is a cheater,” that “even in science you can’t believe a thing” and so on, through an entire gamut of nullifying declarations. But, no: While in principle a single contrary observation may be enough to refute a scientific theory, even hundreds of articles based on fake data are not enough to undermine the credibility of science.
“Science progresses thanks to scientists, but its resilience comes not necessarily from them but from the scientific culture,” says Gil Levkowitz, the one-time undergrad who tried in vain so many years ago to replicate the method for breeding transgenic chickens, a method that afterward turned out to be fraudulent. Today a veteran scientist at the Weizmann Institute, he speaks from the perspective of many years.
“Scientific culture is based on scientists publishing articles and allowing the discipline and time to prove whether they are right or not,” he says. “Mistakes sometimes occur in research. It’s also possible that we misinterpret information, and there are also cases of crass fraudulence, which stem from pressure, from economic interests, from all kinds of reasons that psychologists are invited to examine, because in the end scientists are human. But one way or the other – whether there are mistakes, or bias or fraud – science can accommodate everything.”
Levkowitz: “Scientific culture unfolds in a way that makes deception and mistakes ‘self-eject,’ especially in subjects that are of interest to enough people. If it’s an important study, people in different labs will try to replicate it, and if they don’t succeed, suspicion of error or fraud will arise. In any event, from the point of view of science, that study will disappear rather than be assimilated into the great body of knowledge. And if the fake article is not really interesting, and the subject is not important, then there’s a chance that the perpetrator will not be caught, because no one will try to work with his study. But in that case it’s not truly important: In any case, the lie will not have far-reaching implications for the world of science…”
And with regard to the truly important things and subjects that many researchers are engaged in, science will filter out the falsifications.
“Exactly. Scientific research is a continuing collective effort to discover the truth. The many participants involved are what drives it forward and also guarantee its integrity and validity. Scientists cast doubt constantly, not necessarily in order to discover frauds. They rely on each other’s findings, collaborate, and also compete with and criticize one another. The joint effort creates a type of checks and balances, which in the end impels science in the right direction.”