Resolving Conflicting Research Results: Vaccine Education is Tricky
Note: This post also appears on Insight, the official blog of the Skeptic Society.
A few months ago I wrote about the psychology of vaccine denial. In the post I discussed two publications, one of which (Nyhan et al.) found:
Corrective information reduced misperceptions about the vaccine/autism link but nonetheless decreased intent to vaccinate among parents who had the least favorable attitudes toward vaccines. Moreover, images of children who have measles, mumps, or rubella and a narrative about a child who had measles actually increased beliefs in serious vaccine side effects.
None of the interventions increased parents’ intent to vaccinate.
Then, a couple of weeks ago, a friend sent me a link to this piece describing research that seems to contradict that finding. The authors (Horne et al.) concluded that
…highlighting factual information about the dangers of communicable diseases can positively impact people’s attitudes to vaccination.
These two conclusions seem to contradict each other. Which should we believe?
Often this question comes down to the quality of the research. In this case, I believe both are fairly well-designed studies. One, however, is more precise than the other in several ways, and I believe that precision highlights the complexity of the issue and gives us a better idea of the direction vaccine promotion should take.
Let’s look at the differences in sampling and method between the two studies.
The Horne study sampled 315 men and women. In the Nyhan study, the final sample was 1,759 parents with children under the age of 18. In most research, 315 subjects is more than sufficient, and more is not always better. The danger with larger samples is finding effects that are statistically significant but not practically significant. However, when comparing conflicting findings, it is best to bet on the side of the larger sample.
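To see how a large sample can make a practically trivial effect "statistically significant," here is a minimal simulation with invented numbers (the effect size and sample size are assumptions for illustration, not values from either study):

```python
# Minimal simulation (invented numbers): a tiny true effect becomes
# "statistically significant" once the sample is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 100_000                          # very large sample per group
control = rng.normal(0.00, 1.0, n)   # baseline attitude scores
treated = rng.normal(0.03, 1.0, n)   # true effect: 3% of a standard deviation

t_stat, p_value = stats.ttest_ind(treated, control)
cohens_d = (treated.mean() - control.mean()) / np.sqrt(
    (treated.var(ddof=1) + control.var(ddof=1)) / 2
)

print(f"p = {p_value:.2e}")   # far below 0.05: "significant"
print(f"d = {cohens_d:.3f}")  # ~0.03: far too small to matter in practice
```

The p-value clears any conventional threshold, yet an effect of three hundredths of a standard deviation would change almost no one's vaccination decision.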
Then there’s the question of limiting the study to parents. Although Horne compared parents to non-parents and found no significant differences in attitudes or effects, noise is noise. These two groups of people vary, and the attitudes of non-parents are not particularly relevant. Limiting the study to parents would give me more confidence in the robustness of the findings and their application in real-world activism.
Still, if both are reasonably well-designed studies by competent researchers, the end results should not contradict each other. So there must be more going on. And there is.
For one thing, this is a great example of how complex social sciences are. We should never make policy decisions based on a single study and this demonstrates why. Replication, especially with variants of measures and materials, is essential to learning the best methods of persuasion.
For another, these studies differ in more than just sampling techniques. The Horne study is much simpler and, in fact, oversimplifies. Nyhan et al. included three outcome measures, each addressing a specific attitude:
- The belief that vaccines cause autism.
- Perceived risk of side effects from vaccines.
- Intent to vaccinate one’s child/children.
By contrast, the Horne study involved a single measure that combined answers to five specific questions (such as “I intend to vaccinate my child.” and “Doctors would not recommend vaccines if they were unsafe.”) into a vaguer “vaccine attitudes” scale. Even if the answers to these questions are highly correlated, interventions may affect each answer very differently; they certainly did in the Nyhan study. And if “effective” is defined as increasing intent to vaccinate, then the Horne study does not answer the question it purports to answer. Personally, I am more interested in intent to vaccinate than in any other aspect of “vaccine attitudes,” so the Nyhan study’s findings are much more meaningful to me.
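To make that concrete, here is a toy sketch with entirely invented numbers (the item names and values are mine, not either study's): an intervention can raise the composite score even while the intent-to-vaccinate item drops.

```python
# Toy example (invented numbers): a composite attitude scale can mask
# opposing movements in its component items.
items_before = {
    "intend_to_vaccinate": 4.0,
    "doctors_recommend_safe_vaccines": 3.0,
    "diseases_are_dangerous": 3.0,
}
items_after = {
    "intend_to_vaccinate": 3.4,              # intent actually dropped...
    "doctors_recommend_safe_vaccines": 3.6,  # ...while other beliefs improved
    "diseases_are_dangerous": 3.8,
}

composite_before = sum(items_before.values()) / len(items_before)
composite_after = sum(items_after.values()) / len(items_after)

print(f"composite before: {composite_before:.2f}")  # 3.33
print(f"composite after:  {composite_after:.2f}")   # 3.60 -> looks like success
```

A study reporting only the composite would call this intervention a win; a study reporting the items separately would flag the drop in intent.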
In general, it is best to measure outcomes of interest as specifically as possible, but of course the more outcomes a researcher studies, the larger the sample must be.
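A rough power calculation illustrates that trade-off. The effect size here is an assumption chosen for illustration, and the alpha adjustment is a simple Bonferroni correction, not anything either study reported using:

```python
# Rough illustration (assumed effect size): testing three outcomes instead
# of one, with a Bonferroni-corrected alpha, requires a larger sample.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
d = 0.2       # assumed small effect size
power = 0.8   # conventional target power

n_one = analysis.solve_power(effect_size=d, alpha=0.05, power=power)
n_three = analysis.solve_power(effect_size=d, alpha=0.05 / 3, power=power)

print(f"one outcome:    ~{n_one:.0f} per group")    # roughly 394
print(f"three outcomes: ~{n_three:.0f} per group")  # roughly 525
```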
Finally, and perhaps the most important difference between these two studies, is the timing of the experimental portion. When measuring the effect of treatments or interventions on attitudes, an experiment should be spaced over time: a researcher measures the attitude, then waits before applying a treatment and measuring the attitude again. When people are polled about their attitudes, those attitudes are brought to mind. This affects our receptiveness to relevant information in complex ways, ways that vary based on a number of other factors, such as the strength of our attitudes and the way the questions are worded. Allowing subjects to forget about the initial survey therefore provides a more accurate picture of how people confronted with information in the real world may respond to it.
The Horne experiment was conducted a day after the initial screening, while the Nyhan experiment occurred about two weeks after it.
My conclusion? I think the issue is complex, but while Horne’s findings appear easier to understand, Nyhan’s findings are more specific, answer more interesting questions, and can be more easily viewed within the framework of well-established knowledge about human decision-making (e.g., cognitive dissonance).
That, and we need more research if we are to develop effective ways of increasing vaccination rates.
Horne Z, Reifler J, Powell D, Hummel JE, & Holyoak KJ (2015). Countering antivaccination attitudes. Proceedings of the National Academy of Sciences of the United States of America, 112(33), 10321–10324. PMID: 26240325
Nyhan B, Reifler J, Richey S, & Freed GL (2014). Effective messages in vaccine promotion: a randomized trial. Pediatrics, 133(4). PMID: 24590751
I enjoyed your blog post. I think you raise some important points about how to interpret our results and Nyhan’s results. I do feel that other experimental issues, which are slightly more subtle, were glossed over in this overview. However, I understand that you cannot necessarily get into the nitty-gritty in such a post.
If you have question-level concerns about our findings, I’d encourage you to check out the data here: https://osf.io/nx364/
-Zach Horne
I greatly appreciate the link to your data, although I have no real concerns about your findings. As I noted, I think the research was well-designed, and I have no serious criticisms (which, as anyone who reads the blog will attest, is a huge compliment) of the research itself.
And, yes, this wasn’t meant to be a thorough discussion of method, just a quick overview to show laypersons how the findings of two studies can appear to contradict each other even when neither is badly flawed.