Regular readers of this blog, and of the small number of other autism science blogs, will have noticed how often research findings are contradictory. In autism there is often a good reason for this: in some of the underlying dysfunctions, both too much and too little (hyper and hypo) can lead to “autism”.
Take the relatively simple case of GI problems in autism; estimates of prevalence range from 9% to 91%. Such a variation is of course absurd, and depending on which study you choose to quote, you can make whatever case you want.
Given that autism is not a single disease like, say, rheumatic fever (another very odd inflammatory disease), anything becomes much harder to prove.
In the wider world of medical research there is an ongoing debate about the reliability of research studies in general. Most of that research is in much simpler types of disease, where you can accurately include in a trial only people who share the same single biological dysfunction.
It has been suggested recently in the Lancet, one of the world’s top medical journals, that perhaps as much as half of the findings in research are untrue. Here is an extract:
The case against science is straightforward: much of the
scientific literature, perhaps half, may simply be untrue. Afflicted by studies
with small sample sizes, tiny effects, invalid exploratory analyses, and
flagrant conflicts of interest, together with an obsession for pursuing
fashionable trends of dubious importance, science has taken a turn towards
darkness. As one participant put it, “poor methods get results”. The Academy of
Medical Sciences, Medical Research Council, and Biotechnology and Biological
Sciences Research Council have now put their reputational weight behind an
investigation into these questionable research practices. The apparent
endemicity of bad research behaviour is alarming. In their quest for telling a
compelling story, scientists too often sculpt data to fit their preferred
theory of the world. Or they retrofit hypotheses to fit their data. Journal
editors deserve their fair share of criticism too. We aid and abet the worst
behaviours. Our acquiescence to the impact factor fuels an unhealthy competition
to win a place in a select few journals. Our love of “significance” pollutes
the literature with many a statistical fairy-tale. We reject important
confirmations. Journals are not the only miscreants. Universities are in a
perpetual struggle for money and talent, endpoints that foster reductive
metrics, such as high-impact publication. National assessment procedures, such
as the Research Excellence Framework, incentivise bad practices. And individual
scientists, including their most senior leaders, do little to alter a research
culture that occasionally veers close to misconduct.

Can bad scientific
practices be fixed? Part of the problem is that no-one is incentivised to be
right. Instead, scientists are incentivised to be productive and innovative.
Would a Hippocratic Oath for science help? Certainly don't add more layers of
research red-tape. Instead of changing incentives, perhaps one could remove
incentives altogether. Or insist on replicability statements in grant
applications and research papers. Or emphasise collaboration, not competition.
Or insist on preregistration of protocols. Or reward better pre and post
publication peer review. Or improve research training and mentorship. Or
implement the recommendations from our Series on increasing research value,
published last year. One of the most convincing proposals came from outside the
biomedical community. Tony Weidberg is a Professor of Particle Physics at
Oxford. Following several high-profile errors, the particle physics community
now invests great effort into intensive checking and re-checking of data prior
to publication. By filtering results through independent working groups,
physicists are encouraged to criticise. Good criticism is rewarded. The goal is
a reliable result, and the incentives for scientists are aligned around this
goal. Weidberg worried we set the bar for results in biomedicine far too low.
In particle physics, significance is set at 5 sigma—a p value of 3 × 10⁻⁷
or 1 in 3·5 million (if the result is not true, this is the probability that
the data would have been as extreme as they are).

The conclusion of the symposium was that something must be done. Indeed, all seemed to agree that it
was within our power to do that something. But as to precisely what to do or
how to do it, there were no firm answers. Those who have the power to act seem
to think somebody else should act first. And every positive action (eg, funding
well-powered replications) has a counterargument (science will become less
creative). The good news is that science is beginning to take some of its worst
failings very seriously. The bad news is that nobody is ready to take the first
step to clean up the system.
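As a rough check on the numbers quoted above, the 5 sigma threshold and the familiar p < 0.05 threshold can be compared directly. A minimal sketch in Python (assuming scipy is available):

```python
# Quick check of the figures quoted above: the one-sided tail probability
# beyond 5 sigma of a normal distribution, and the number of sigmas that
# the conventional p < 0.05 threshold corresponds to.
from scipy.stats import norm

p_five_sigma = norm.sf(5)          # survival function: P(Z > 5)
print(f"p at 5 sigma: {p_five_sigma:.1e}")        # ~2.9e-07
print(f"about 1 in {1 / p_five_sigma:,.0f}")      # ~1 in 3,500,000

sigmas_at_p05 = norm.isf(0.05)     # sigmas needed for one-sided p = 0.05
print(f"p = 0.05 corresponds to about {sigmas_at_p05:.2f} sigma")  # ~1.64
```

The conventional 0.05 cut-off sits at only about 1.6 sigma (one-sided), which is the gap Weidberg was pointing at.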
Ten years ago John Ioannidis, a well-known Professor of Medicine now at Stanford, published a much more complex paper saying essentially the
same thing. That paper, below, was recently highlighted
to me by one reader of this blog, who is involved in such research.
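The core of that paper can be boiled down to simple arithmetic about the chance that a “statistically significant” finding is actually true, given how plausible the hypothesis was before the study and how well powered the study is. Below is a minimal sketch of that calculation; the study scenarios and the numbers in them are illustrative assumptions, not figures taken from the paper:

```python
# Positive predictive value (PPV) of a "significant" finding, following the
# framework in Ioannidis (2005), "Why Most Published Research Findings Are
# False": PPV = (1 - beta) * R / (R - beta * R + alpha), where R is the
# pre-study odds that the tested relationship is true, alpha is the
# significance threshold and (1 - beta) is the statistical power.
# (The paper also adds a bias term, which only lowers these numbers.)

def ppv(pre_study_odds: float, power: float, alpha: float = 0.05) -> float:
    """Probability that a finding declared significant is actually true."""
    r, beta = pre_study_odds, 1.0 - power
    return (power * r) / (r - beta * r + alpha)

# Illustrative scenarios (assumed numbers, not from the paper)
scenarios = [
    ("plausible hypothesis, well-powered trial (R=1, power=0.8)", 1.0, 0.8),
    ("typical modest study (R=0.25, power=0.5)", 0.25, 0.5),
    ("long-shot hypothesis, small study (R=0.1, power=0.2)", 0.1, 0.2),
]

for label, r, power in scenarios:
    print(f"{label}: PPV = {ppv(r, power):.2f}")
# Prints roughly 0.94, 0.71 and 0.29: in the last case most
# "significant" findings would be false.
```

With plausible hypotheses and decent power, most positive findings hold up; with long shots, small samples and bias added on top, the majority of “significant” results can indeed be untrue.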
Conclusion
Since modern medicine, and indeed some autism therapy, is all about being
“evidence based”, this does pose a big problem.
When the experts themselves are telling us not to trust the evidence, who or what do you trust?
Getting into debates about almost anything to do with autism, with
anyone, usually gets you nowhere at all.
Even the most basic points are disputed; that, at least, is one thing people can agree on.
In the end it comes down to your own experience and judgement; I hope it is good.
In spite of all its shortcomings, I continue to marvel at how easy it is to access what you want from the vast amount of accumulated research, draw your own conclusions and act on them. I wish more people did the same.
This is totally off the topic of this blog, but I'm looking for an answer here. I have put my little boy in the clinical trial for Curemark's CM-AT. The chymotrypsin levels are blinded, as well as placebo or meds. He started on Monday and he is behaving worse! Do you know if this treatment has a period of regression before they start improving? I'm not saying you should know this, but I'm reaching out to anyone that can tell me if I've got a light at the end of the tunnel.
I would not worry about this. It might be a coincidence that he is behaving a bit worse.
CM-AT seems to be a mixture of various pancreatic enzymes that are suggested to affect the production of various other substances required by the body. Even if CM-AT was effective, it would not be surprising if these enzymes produced some negative effects in the short term.
If your boy is not lacking these enzymes and their eventual products, he might then react to this excess.
Given the nature of autism, even if CM-AT "works", it will likely only work for a certain percentage of cases. If 30% were to be responders that would be a great result.
So I would continue with the trial and only consider dropping out if things get really bad (self injury, loss of speech etc.).