The Shortcomings of Evidence-Based Practice
I’m finding myself increasingly annoyed by the emphasis on evidence-based practice in the autism world. This is not because I don’t see the value in responsible intervention practices that are consistent with research and theory. There are still far too many people who are taken in by alternative pseudoscience, quack treatments, and faith healing in a vain attempt to “cure” autism. Even if we leave aside the important ethical question of whether a “cure” for autism itself is truly desirable,1 these complementary & alternative practices are invariably ineffective,2 frequently expensive, and sometimes quite dangerous. To protect autistic individuals who are subjected to these practices, and to protect families whose desperation allows them to be exploited by peddlers of miracle cures, we of course must continue to speak out against these “alternative facts,” especially as the internet age makes it so much easier to spread misinformation.
However, I’m not convinced that championing “evidence-based practices” is the most useful way of confronting quackery. Having a randomized controlled trial showing that your intervention does what it is supposed to do is great, but there are some areas where quantitative evidence runs into limitations.
Problems with the Focus on Evidence-Based Practices
One important question about an evidence base is the type of outcome that is being investigated. There are many outcomes we can measure, but are all of them the right outcomes to measure? For example, lots of studies look at autistic features (“autism severity”) and suggest that making somebody less autistic (as it were) is desirable in and of itself – but research is now showing us that hiding one’s autistic features can be associated with poorer mental health.3 Clearly, we need to think critically about the sorts of outcomes that matter.
Another important question is that of measurement itself. We’re fairly good at measuring things like autistic features and IQ, but we’re far from perfect even there. I’ve previously grumbled about all the issues I see with autism diagnostic measures, and there’s plenty wrong with cognitive assessments as well. The difficulties of measuring IQ in what we would now call autism were noted as early as 1935. Today, we know that there are big discrepancies between different cognitive ability measures when they are applied to autism. And this is what we’re good at! I think we’re a lot less good at measuring things that have to do with people’s subjective experiences, like their subjective quality of life and happiness, or their sensory experiences. This becomes especially difficult given that some autistic people cannot use language to describe their experiences.4
Moreover, some interventions are harder to study than others. The interventions we study in research tend to be things like drug trials, or interventions delivered by a clinician or professional for a certain number of hours per week for a certain number of weeks. These are relatively uncomplicated to organize.5 However, while a few months of occasional cognitive-behavioural therapy sessions might indeed be helpful, there are bigger questions. For example, what is the right educational environment for a child? This is a question that will pervasively affect someone’s daily experiences, and its effects will of course be far greater than those of a few brief therapy sessions. Right now, we have very little evidence about this question, because it is something that can’t be easily studied by randomizing people, giving them a quick treatment for a few weeks, and then seeing what happens.
In fact, I think just about the only interventions studied using randomized controlled trials that really seek to fundamentally transform the daily environment around the autistic person in an ongoing way are the (highly controversial) early behavioural interventions. Not all early intervention studies reach this level of intensity, but some will deliver intervention for fifteen hours a week or more – sometimes as much as forty hours – over a reasonably long time. They are, in a sense, an exception that proves the rule.
There are still more problems with the primacy given to evidence-based practice. Even if it is easy to study a given intervention or measure a given outcome, researchers might not be interested in doing so. As a researcher, I’m well aware that I approach the world in a biased manner at times, and from what I’ve seen, other researchers are just as biased in their own ways. That’s probably why, for example, there are so many studies looking at autistic features and so few looking at other outcomes that are arguably more relevant to an individual’s quality of life.
Evidence-based practice also typically operates at the group level: we decide that a given intervention is supported by evidence for a given population. However, individuals are unique. People may learn their own coping strategies. They may learn what works and doesn’t work for them. There are individualized models of support which give individuals and their families freedom to hire people to try what works for them. As long as we don’t stray into quackery, and as long as we help people confront any irrational superstitions they might develop, there is much that is attractive about this model. Who better than the individual themselves to decide what will work for them? But an individual’s own self-knowledge, however rich, is not “evidence” in the sense implied by the term “evidence-based.”
Thus, there seem to be many areas where quantitative evidence alone is insufficient. It doesn’t give us enough information to properly gauge the acceptability and desirability of different practices. If an agency or government is making decisions about which supports should be funded and which should not, the evidence base would not be suitable as the sole criterion. If we funded supports solely on the basis of evidence, we’d have to abandon a whole host of practices that we know, on the basis of theory and anecdotal experience, work well. We need to offer people more flexibility than that.
I would even argue that the term “evidence-based” gives us a weak defence against quackery. The limitations of our existing evidence base are evident to more people than just me. Moreover, research is complicated and there are only so many researchers around. Realistically, we’re not going to ever be able to develop the sort of evidence base for all the things we think work that we would like, nor are we going to be able to disprove all the quack cures we would like to disprove.6 Thus, if we start pointing fingers at untested quack treatments and saying they aren’t evidence-based, the quacks can reply with perfect accuracy that there are plenty of other untested things out in the world that we don’t mind at all.
What’s the Alternative?
Instead of simply insisting that practices be evidence-based, I think we need a more nuanced understanding of responsible practice in intervention and support. Having quantitative empirical evidence from well-designed intervention studies is great, and we shouldn’t overreact by rejecting empirical evidence as somehow fundamentally flawed. That would be absurd! The quantitative outcome of a good randomized controlled trial, or better yet a meta-analysis of the outcomes of multiple such studies, allows us to draw conclusions with a high degree of confidence. Ideally, we’d have RCTs to answer every question we could think of!

However, we don’t live in that ideal world, and therefore we shouldn’t go to the opposite extreme of dismissing theoretical argument and qualitative experience.7 These sources of knowledge have their own important role, especially when we are speaking of people’s real-world experiences. I would always take a theoretical argument about the molecular biology of a given neurotype with a grain of salt, but if a person with a given neurotype tells us that, in their own experience, some strategy has helped or hurt them, that deserves consideration. Of course, such experiential and theoretical knowledge must be treated with plenty of skepticism as well, but if enough people say something that makes enough sense, it can offer important insights that the slow, clunky machinery of empirical evidence won’t be able to quantitatively confirm until much later.
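To make concrete why pooling across trials buys confidence, here is a minimal sketch of a fixed-effect (inverse-variance) meta-analysis in Python. All of the effect sizes and standard errors are invented purely for illustration; they do not describe any real trial of any real intervention.

```python
import math

# Hypothetical standardized effect sizes (e.g., Cohen's d) and standard
# errors from three imaginary RCTs of the same intervention. These
# numbers are invented for illustration only.
studies = [
    {"name": "Trial A", "effect": 0.30, "se": 0.20},
    {"name": "Trial B", "effect": 0.45, "se": 0.25},
    {"name": "Trial C", "effect": 0.25, "se": 0.15},
]

# Fixed-effect (inverse-variance) pooling: each study is weighted by
# 1 / SE^2, so more precise studies count for more.
weights = [1 / s["se"] ** 2 for s in studies]
pooled_effect = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect.
ci_low = pooled_effect - 1.96 * pooled_se
ci_high = pooled_effect + 1.96 * pooled_se

print(f"Pooled effect: {pooled_effect:.2f} "
      f"(SE = {pooled_se:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f})")
```

The point of the exercise is that the pooled standard error comes out smaller than that of any single trial (about 0.11 here, versus 0.15–0.25 for the individual studies), which is precisely why a meta-analysis supports stronger conclusions than any one of its component studies.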
1. It is important to note here that a cure for autism itself is very different from a cure for co-occurring conditions. The ethics of “curing” any co-occurring condition, such as a health condition, mental health condition, etc., can often be viewed separately from autism, especially as many interventions will primarily target a relatively small domain, with any effects on other domains being incidental.
2. Barring placebo effects.
3. Of course, in those studies where autistic features decrease, other outcomes of interest – like development of language and functional communication skills – may change as well. But if these are the outcomes we care about, should we not focus on them directly?
4. That being said, if a double-blind randomized controlled trial does end up showing that groups differ in change on some measured variable, there is clearly an effect. We might not interpret it exactly correctly, and our imperfect measure might have underestimated its size (the small simulation after these notes illustrates that underestimation), or there might be other problems, but the effect would still be there.
5. I say relatively because they can still involve veritable legions of clinicians, lab managers, graduate students, and other personnel, at least if the project enrolls a large enough sample. Research is hard work!
6. Besides, some quack cures are dangerous, and it would be unethical to expose people to them.
7. Some people would include these in a chart of “evidence-based practices” as lesser forms of evidence, but in practice, a system that emphasizes evidence-based practices is going to focus pretty much exclusively on quantitative empirical research.
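As promised in note 4, here is a minimal simulation of how an unreliable outcome measure shrinks an observed effect while leaving it real. The numbers (a true group difference of d = 0.5 and a measurement reliability of 0.6) are assumptions chosen purely for illustration, not claims about any actual instrument.

```python
import random
import statistics

random.seed(0)

TRUE_D = 0.5       # assumed true standardized group difference
RELIABILITY = 0.6  # assumed reliability of the outcome measure
N = 100_000        # large groups, so the estimates are stable

def cohens_d(a, b):
    """Standardized mean difference using a pooled standard deviation."""
    pooled_sd = ((statistics.variance(a) + statistics.variance(b)) / 2) ** 0.5
    return (statistics.mean(b) - statistics.mean(a)) / pooled_sd

# Latent (error-free) scores: controls centred at 0, treated at TRUE_D.
control_true = [random.gauss(0.0, 1.0) for _ in range(N)]
treated_true = [random.gauss(TRUE_D, 1.0) for _ in range(N)]

# Observed scores add measurement noise. With reliability r, the error
# variance is (1 - r) / r times the true-score variance, which shrinks
# the observed d by a factor of sqrt(r).
error_sd = ((1 - RELIABILITY) / RELIABILITY) ** 0.5
control_obs = [x + random.gauss(0.0, error_sd) for x in control_true]
treated_obs = [x + random.gauss(0.0, error_sd) for x in treated_true]

print(f"True d:     {cohens_d(control_true, treated_true):.2f}")  # ~0.50
print(f"Observed d: {cohens_d(control_obs, treated_obs):.2f}")    # ~0.39
```

The group difference survives measurement error, which is note 4’s point, but it looks smaller than it really is: with reliability 0.6, a true effect of 0.50 is observed as roughly 0.50 × √0.6 ≈ 0.39.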