The scientific method is the best approach we have for studying and learning about the physical and natural world. When new knowledge emerges and becomes the best available evidence (until more accurate or otherwise better evidence supersedes it), one would hope that it finds its way quickly into the relevant disciplines, and that practitioners take note and incorporate it into their practice, procedures and policies. In health care, the lag between discovery and practice has been assumed to be 17 years – but we don’t really know, and “further research is needed”, as research articles so often conclude.
There are many reasons for the length of the health research translation process. One of these is conflict of interest. A recent article in MJA InSight demonstrates this nicely. The article is titled “Prostate cancer: urologists fight back”.
We have known for some time that – from a population perspective – screening for prostate cancer and the surgical procedures that follow offer little overall benefit for men. Two recent studies have now shown that for men with early prostate cancer, prostatectomy (i.e. surgery to remove all or part of the prostate gland) did not reduce mortality, but left many with nasty side effects.
Two recent clinical trials, Prostate Testing for Cancer and Treatment (ProtecT) and Prostate Cancer Intervention versus Observation Trial (PIVOT), completely undermine the stratospheric spin associated with prostate cancer being a death sentence. They are unambiguous in their implications.
The bottom line? Men with early stage abnormalities of the prostate who do not undergo surgery or radiation treatment, but whose condition is monitored for any progression of the cancer, live just as long as men who opted for complete removal of the prostate and who now live with its immediate consequences, including incontinence, intimacy issues, bowel problems and intervention regret.
This should be good news for older men. But they may never be told.
The MJA InSight article quotes prominent urologists who appear to have difficulty accepting the new evidence. Instead, they dismiss the two studies as being flawed.
Moreover, a radiation oncologist claims that the surgeons act as gatekeepers who often don’t refer higher-risk patients to radiotherapy, which – she says – is as effective as surgery:
There’s a massive financial conflict of interest there, because they don’t have a vested interest in referring men on to a radiation oncologist. They lose income if someone chooses a non-invasive intervention. People are reluctant to say it, but that’s the elephant in the room.
But might radiation oncologists have conflicts of interest as well?
Meanwhile, it may be worth pondering the results of a US study, which compared the recommendations of urologists and radiation oncologists for the treatment of localised prostate cancer. Surprise, surprise: for the same cases, the specialists overwhelmingly recommended the treatment that they themselves delivered.
I argue there are parallels to animal experimentation. Animal researchers have built their careers on experimenting on animals. That’s their area of expertise, that’s the subject of their publications and conference talks, that’s how they make their living. In universities, the pressure to publish or perish is such that researchers rarely have the luxury to take time out for learning new non-animal, human-relevant methods. Operating on mice and using advanced computer-modelling techniques, for example, are quite different skills.
Grants are won on the basis of prior experience, and the peer review system “punishes researchers with innovative projects that may be risky, but could be highly successful”. Doing things differently and taking risks doesn’t pay:
Well-established investigators with mature projects produce large amounts of preliminary data for applications. However, younger researchers (who completed their PhD less than 15 years previously) with new research programs or groundbreaking research struggle to generate similar volumes of data; their teams are smaller and have less funding; they take more risk, and this leads to lower success rates in obtaining funding.
Also, it takes a special person to acknowledge, after a career in a particular area, that much of their work was of limited use. Dr Elias Zerhouni, former director of the US National Institutes of Health (NIH), had this to say:
“We have moved away from studying human disease in humans,” he lamented. “We all drank the Kool-Aid on that one, me included.” With the ability to knock in or knock out any gene in a mouse – which “can’t sue us,” Zerhouni quipped – researchers have over-relied on animal data. “The problem is that it hasn’t worked, and it’s time we stopped dancing around the problem… We need to refocus and adapt new methodologies for use in humans to understand disease biology in humans.”
The pressure to publish for the sake of publishing can lead to dreadful research. Dreadful because of its cruelty in the treatment of animals, and dreadful because it is a great waste of limited resources. This page on the Retraction Watch website critiques one such study.
The evidence for the limited value of animal experimentation is accumulating. Some point the finger at inferior study design in animal research, or more broadly a lack of scientific rigour, compared to studies that involve humans, while others identify species differences as responsible for the poor predictive value of animal models. For further links to studies that highlight why animals are not good models for human medicine, go to this website and search for the keyword “bias” (without the quotation marks).
Why do we let vested interests, financial or otherwise, have such a detrimental influence on the allocation of resources for biomedical research? That might be a topic for another blog post.