From a NY Times article which appeared last week:
Some years ago, Dr. Robert A. Burton was the neurologist on call at a San Francisco hospital when a high-profile colleague from the oncology department asked him to perform a spinal tap on an elderly patient with advanced metastatic cancer. The patient had seemed a little fuzzy-headed that morning, and the oncologist wanted to check for meningitis or another infection that might be treatable with antibiotics.
Dr. Burton hesitated. Spinal taps are painful. The patient’s overall prognosis was beyond dire. Why go after an ancillary infection? But the oncologist, known for his uncompromising and aggressive approach to treatment, insisted.
“For him, there was no such thing as excessive,” Dr. Burton said in a telephone interview. “For him, there was always hope.”
On entering the patient’s room with spinal tap tray portentously agleam, Dr. Burton encountered the patient’s family members. They begged him not to proceed. The frail, bedridden patient begged him not to proceed. Dr. Burton conveyed their pleas to the oncologist, but the oncologist continued to lobby for a spinal tap, and the exhausted family finally gave in.
As Dr. Burton had feared, the procedure proved painful and difficult to administer. It revealed nothing of diagnostic importance. And it left the patient with a grinding spinal-tap headache that lasted for days, until the man fell into a coma and died of his malignancy.
The oncologist’s intentions were good, but he cared so much for the welfare of his patients that it clouded his judgment about what was best for them. The goal he wanted to accomplish was driven by his values, as most goals are, but his ability to accomplish that goal was hindered by those same values.
In the past month alone, I have seen good skeptics deny consensus science, cherry-pick, hyper-rationalize, and engage in a number of poor practices in order to justify their decisions or actions. In the past few years, I have noted an embarrassingly large number of occasions in which skeptics have charged forward with ideas in ways I consider to be counterproductive and, in some cases, potentially harmful – giving talks and workshops without an appropriate amount of knowledge on the subject, staging meaningless protests simply because they’ve gained attention, or wasting resources conducting surveys and experiments without clear goals, training, or regard for issues such as the ethical treatment of human subjects. I am sure that these skeptics were motivated by a desire to make a difference – a desire to do something. However, ideology, values, passion, and beliefs got in the way of good reasoning. For example, last year a group of skeptics, angry that an anti-vax rally starring Wakefield was going on in their town, charged forward without consulting an expert and distributed a number of fliers which said, in part:
Vaccines…don’t cause diseases or disorders or distress or dystopia. In fact, receiving vaccines is completely safe.
I don’t think I need to go into the possible ramifications of this mistake.
Skepticism, as a movement, promotes critical thinking, careful consideration of evidence, and attention to details which are easily missed. When skeptics fail to apply those same principles to the work, their actions are, at best, wasteful and, at worst, potentially harmful.
I found myself scratching my head last week when D.J. Grothe posted this article to Swift, titled “Should skepticism be divorced from values?” It was a surprise for two reasons. 1) On most matters of the philosophy of skepticism and even activism, D.J. and I are in near-total agreement, yet I did not agree with this piece at all. 2) It seems to contradict some of D.J.’s statements, particularly those he has made on stage at various events.
In an effort to better understand, I entered a conversation on Facebook and tried to explain my confusion as well as my opinion. I failed. D.J., no doubt drawing on experiences in conversations with me and others about similar topics, is certain that we agree and that talking it out will eventually lead us both to see that. I am not as confident. Although I do not doubt that D.J. will agree with nearly everything in this post, I think we will remain divided on an important point.
First, let me declare now that I have a tremendous respect for D.J. At every event he seems to find ways to communicate the most important fundamentals of organized skepticism, facts that new participants need to know (and seasoned skeptics need to remember) such as organizational scope, tolerance, and integrity. He does so without apology. He is also one of the best panel moderators and interviewers I have ever seen. He asks tough questions without blinking and, when those questions are not directly answered, he steers the conversation in the direction intended. That said, the post bothered me and not a little bit.
Second, I will not use the term “divorced” because I don’t feel that conveys an appropriate level of distance (so, in that sense, one may split hairs and say that D.J. and I agree). I will use “separate”.
A little bit of background
In the post and afterward, D.J. notes that the post is a reinforcement of his 2010 NECSS talk – a talk I quite like. There are elements of the talk with which I take issue, but overall I feel that it is a good “initiation” talk for new skeptics. I would summarize the talk this way:
“Skepticism” refers to a method for evaluating claims, but it also refers to a movement. The movement is a type of humanism. It is a type of humanism because those who began it did so for humanitarian reasons. To Randi, it’s just the right thing to do. This humanism drives me (D.J.) and most others I know; we debunk and educate because pseudoscience is harmful. We share the value that to know reality is to avoid such harm. In order to do this work, we must also have a strong mind – the kind of mind that allows us to see reality as it is and not how we would like it to be.
I don’t think that D.J. would disagree with this summary, but perhaps his emphasis is different from mine. I would expect that. As an instructor and researcher, I have focused on the importance of objectivity and how to achieve it. As an activist, D.J. has focused on the reduction of harm.
However, I believe that D.J.’s Swift post differs significantly from this talk and introduces a serious problem in an attempt to emphasize the humanistic goals of organized skepticism. The problem is in the title and is the theme of the post – a theme I do not believe it shares with the NECSS talk. Regardless of D.J.’s intended message, I feel very strongly that this post sends the wrong message – a message that it’s okay (maybe even important) to allow one’s “moral indignation” to dictate how the work is done. It’s not. In fact, it’s more than just not okay. A core property – THE core property – of good science is objectivity. Values are important. Values motivate us to act and provide us with goals. However, values, practically by definition, erode objectivity almost universally.
In a comment on Facebook, D.J. stated:
I don’t want the position that one must separate ethics from her skepticism to gain ground. It’s both wrong, and also counter my goals.
Ethics are a slightly different issue and very domain-specific. D.J. specifically described a moral imperative (to mitigate the harm that pseudoscience causes), so in my mind “moral values” replaces “ethics” in his sentence and I address it as such.
The problem of values
It is actually unethical, in my opinion, to fail to separate one’s moral values from the work.
Is it ethical for a pharmacist to refuse to sell contraceptives, yet expect to be allowed to continue in that career? Is it ethical for a doctor to tell the parents of a fifteen-year-old victim of incest about her pregnancy because he believes that she’s lying and the father is the head of the household?
Those are, of course, examples of situations in which the values conflict with the work. However, there are many, many ways in which the same values that motivate people to pursue a career or volunteer work hinder their ability to do that work well. This is more obvious in some careers than others; some that come to mind immediately (other than the most obvious, scientists) are doctors, politicians, judges, journalists, and teachers. For example, would it be ethical for the teacher who wrote this letter to fail to teach evolution because he feels that it makes kids “think like atheists”, something he feels is harmful to kids? Is it okay for a journalist to slant a story rather than simply report the facts?
One of D.J.’s comments sums up the differences between us, I think. He wrote:
I do not favor letting the suggestion stand that the method of skepticism should be practiced in a value-neutral vacuum.
Insisting that practitioners separate their values from the work is not even close to creating “a vacuum”. The “moral imperative” provides both motivation and a general purpose (e.g., “to reduce or eliminate the harm caused by pseudoscience”). However, that is where the role of values should end. I contend that any practice of skepticism that does not strive to be value-neutral is contradictory, counterproductive, hypocritical, and generally just bad.
Another of his comments reads, in part:
If what you are saying is that the work of skepticism should be practiced in a value-neutral way, and that our priorities as skeptics should not be informed by our ethical commitments (as an example, defrauding someone of their nest egg with fake psychic claims is equivalent to your grandpa thinking he can dowse in your backyard) then I disagree.
Well, this is quite a loaded statement, but I am saying something very much like this. I am certainly not saying that those two examples are equal, but I am saying that priorities should be informed by facts. One of those facts is a goal or set of goals derived, in part, from values. Once a general purpose or mission is defined, the question of priorities is epistemological; we need to know which projects best meet our goals. The examples D.J. provided are easy to compare, but what about the more difficult comparisons? Which should be a higher priority, rallying people to pass out fliers at a “talk” by the author of an anti-vaccine book or producing materials to be used in classrooms to teach kids how to evaluate claims? In both the easy and the difficult scenarios, the choice should be driven by the organizational goals (facts) and information about how each scenario meets those goals (more facts). Values should be set aside because they impair our ability to perceive, process, and remember facts.
Recognizing one’s motivations and separating them from the process of reasoning is a fundamental part of both science and skepticism.
If you think about the psychologists who have spoken at TAM and other events, most of the topics covered relate to the myriad ways that human beings err in receiving, recording, remembering, and processing information about the world. It is precisely because we are so bad at this that we need science. And it is precisely because we are so bad at this that skeptical activism exists.
The examples we use to demonstrate these flaws are usually a bit removed from daily life. Visual illusions, pareidolia, and probability problems do not always show how subtle the reasoning problems can be. Consider this example from a recent Scientific American blog post Lessons from Sherlock Holmes: Trust in The Facts, Not Your Version of Them (bold mine):
When we look around us, what is it that we see? Do we see things as they are, or do we at once, without thinking, begin to interpret? Take the simple example of a wine glass. All it is is a transparent object that holds a liquid–which we know by experience should be wine. But if we’re in a store and late for a party? It’s a present, an object of value and beauty for someone else to appreciate. At home and thirsty? It becomes, perhaps, a water glass, if nothing else is available. Bored? A toy to turn around and around, seeing what reflections we can see, how we can distort our own face on the curved surfaces. Solving a murder? Potential evidence of some final, telling pre-death interaction–perhaps the victim took a final sip before he met an untimely end.
Soon, instead of saying there is a wine glass on the table, you say the victim’s glass had been empty at the time of the crime. And you proceed from there. Why was the victim drinking? Why was he interrupted? Why had he placed the glass where it was? And if it doesn’t make sense? Impossible. You’ve started with a fact and worked your way forward. It must fit. The only thing is, you’ve forgotten that it was just a glass to begin with. The victim’s? Maybe not. Placed there by him? Who knows. Empty at the time of the crime? Perhaps, but perhaps not. You’ve imbued an object with a personal take so naturally that you don’t realize you’ve done it. And that’s the crucial–and sometimes fatal–error, of both reasoning and world perception. A pipe is never just a pipe.
Hardly ever, in describing an object, do we see it as just a valueless, objective wine glass. And hardly ever do we think to consider the distinction–for of course, it hardly ever matters. But it’s the rare mind that has trained itself to separate the objective fact from the immediate, subconscious and automatic subjective interpretation that follows.
The way our perceptual and cognitive systems operate allows us to function in the world, but higher-order thinking requires recognizing the flaws in this system and correcting for them. This is the “strong mind” that D.J. was talking about in his NECSS talk. Most skeptics are intimately familiar with the confirmation bias, which is the tendency to notice, remember, believe, and assign more weight to information that is consistent with our current beliefs than to neutral or conflicting information. This bias is one of many biases and heuristics, but it is arguably the one that does the most damage to our ability to reason well. What many skeptics may forget is how many of our beliefs are ideological – driven by moral values and opinions more than facts. These beliefs are even more difficult to set aside because they embody what we wish to be true more than simply what we think is true. So it is even more important to separate ideology from epistemology and decision-making than it is to separate other kinds of beliefs.
Most readers are familiar with the thought experiments in moral reasoning that provide a framework for the practice of solving moral dilemmas; they illustrate my point well. A variant of “the trolley problem” is particularly relevant:
A train (trolley) is running out of control down a track. In its path are five people who have been tied to the track by a mad philosopher. Fortunately, you are on a bridge under which it will pass, and you can stop it by dropping a heavy weight in front of it. As it happens, there is a very fat man next to you – your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five, but you know for certain that it will work. You do not weigh enough to stop the train, so simply jumping is an act of suicide that will not save the people. Nobody will see you push him, so there are no social or legal consequences to consider. Would you push him?
This is a very tough choice. On the one hand, your ability to act in this situation alone makes you morally obligated to act, at least according to many. Failing to act is an action in and of itself; you’ve allowed five people to die. On the other hand, pushing the man off the bridge is an act which can only be considered murder. The most morally correct decision is generally considered the utilitarian decision to throw the fat man over, yet few people make that choice. [NOTE: I am fully aware that some argue about whether utilitarianism is truly rational and I will not discuss those issues here. I will just say that these scenarios severely limit the number of possible strategies and force a choice between them.]
This is, admittedly, grossly oversimplified moral reasoning without an epistemological context, but it is not difficult to add such context. In fact, this exercise was, ironically, part of a recent study that provides that kind of context in addition to explaining what’s wrong with using the problem as more than an illustration.
Bartels and Pizarro presented students with a series of bridge-style exercises, including a “fat man” version of the trolley problem. What they found was that the rate of utilitarian responses was positively correlated with measures of psychopathy (someone high in psychopathy will be low in empathy and relatively anti-social) and Machiavellianism (the degree to which an individual is emotionally detached, cynical, and manipulative).
If you don’t understand the study, as the media clearly didn’t (big surprise), you might be tempted to conclude (as the media did) that people who care little about others can make the best decisions about what is best for the majority. This is an ugly finding that many people are likely to reject, simply because they don’t like it. Science doesn’t work that way. Science is about truth, not values, and sometimes the truth is just not pretty. Scientists who fail to separate their values and motivations from their work fail to interpret evidence appropriately (or form good theory). The same is true for skepticism.
However, when viewed in the context of the literature on moral judgments, the finding is not about the characteristics of reasoners, but the use of these exercises to measure moral reasoning:
Our study illustrates that the widely adopted use of sacrificial dilemmas in the study of moral judgment fails to distinguish between people who are motivated to endorse utilitarian moral choices because of underlying emotional deficits (such as those captured by our measures of psychopathy and Machiavellianism) and those who endorse it out of genuine concern for the welfare of others and a considered belief that utilitarianism is the optimal way of achieving the goals of morality.
Now, I think that there is something missing from this study that would likely wash out the effects, namely that the sample of college students is likely to be filled with people who have not yet spent much time thinking about moral dilemmas. In fact, a 2008 study suggests that most undergraduate students do not even know what a moral dilemma is. There are exceptions, but many students might have genuine concern for the welfare of others yet fail to recognize utilitarianism as an optimal choice at this time in their lives. I suspect that, given a sample with a wider age range, the effects would be reduced or disappear as the proportion of caring utilitarians increases.
Even with such a sample, though, the authors’ conclusions in regard to their purpose stand because, in part, the scenarios do not consider how the individual arrived at the choice. A common problem in studies of cognitive processing is that arriving at the prescriptive answer is no guarantee that one has followed good reasoning to get there. Consider the atheist who endorses alternative medicine (*cough* Bill Maher). The exercises are easily reduced to a simple math problem. What they measure is one’s ability to determine the “morally correct” course of action given a specific scenario, not whether one has adopted the moral values that we assume are embodied in that choice.
So what does this have to do with skepticism and values? Let me explain by telling you what I would predict if I could observe participants in real-life situations as described by the “fat man”. I believe that the psychopaths and machiavellians would fail to push the fat man. They may know that this is the best moral choice, but not care. They have no motivation to act. The result is failing to save four people (net).
So, I believe that values are extremely important because they motivate us to take action. However, which action is best? The individual who is unable to separate their values from the choice they have to make, at least according to this and many other studies, usually fails to make the utilitarian choice in any case. If you do not know what the best choice is, how can you take the best action? (This, by the way, is what is meant by “informing values”. I contend that we can only do so by first setting them aside.) I’d predict that those who both value the lives of others and are able to set that value aside and solve the problem objectively are much more likely to take action than either of the other two. In less restricted, real-world scenarios, these are the people who take the actions which are most likely to lead to positive change.
The consequences of value-driven actions
Humanism is an ideology which drives us to promote skepticism. That same ideology drives others to a long list of careers and activities, from social worker to clergy to homeopathic product sales. Secular humanism may reduce that group to atheists and agnostics, but my point here is that humanism is not why we promote skepticism. It’s why we want to help people. We promote scientific skepticism for a number of reasons, some of which are shared, such as the belief that it is the best way to evaluate claims. Some other reasons to choose skeptical activism as a means of helping people are that we find it interesting or have a specific skill set which can be of use. However, these are motivations to do the work and not the work itself.
I realize that I now sound like a broken record, but if we fail to separate these motivations from the work, we fail to be objective. “Righteous indignation” may lead to action, but it does not always lead to positive actions when it clouds our judgment. How do we keep it from clouding our judgment? By separating it from the work. Cool heads prevail; hot heads make mistakes.
Good intentions have motivated people to do all sorts of things. Outcomes from the actions we take with good intentions are just like those we take when our intentions are not so good: they vary from great to devastating. Take, for example, the well-intentioned “Self-Esteem Movement”, an effort to increase academic performance, reduce bullying, and create a long list of other benefits. With the best of intentions and motivated by values that I believe most of us share, educators, parents, and psychologists plowed forward with programs and policies which are still very alive and well today. These policies have done irreparable harm to our children and society in general because they achieve the opposite of what they set out to achieve.
Contrary to popular belief, children do not need high self-esteem in order to succeed. In fact, efforts to raise self-esteem are extremely counterproductive and harmful because they tend to increase not self-esteem, but narcissism. These efforts are particularly harmful when enacted as part of a bully prevention program. The Freudian idea that bullies are compensating for low self-esteem is not only a myth; the opposite is true. Bullies are narcissistic and entitled. Attempting to raise their self-esteem makes the problem worse, not better. Recent reviews of the literature lead to clear conclusions: narcissists often respond to criticism and rejection with aggression. They do this because they are incapable of understanding the point of view of another and, therefore, helpless to change it. Like a toddler with no negotiation skills, they throw a tantrum.
Most laypersons adopt similar views of criminals and others with anti-social behaviors. It feels better to think of people who do bad things as “broken”. Not only does it allow us to think that people can never be inherently bad, but it gives us a sense of control. If we can just “fix” them, they’ll be good, or if we can stop the cycle of abuse… right?
The use of pop-pedagogy is another example of good intentions and values getting in the way of reason. If you doubt that pseudoscience in education is a serious problem, attend a back-to-school night or just visit some education websites and count the number of references to “Learning Styles”, “Multiple Intelligences”, “Emotional Intelligence”, or “Bloom’s Taxonomy”. Then visit the education department of any university and discover why. Instead of teaching from the academic literature, they are teaching from textbooks with content drawn from the popular press. Teachers adopt these ideas because they seem right and they address good values – the idea that every child is equally intelligent, just in different areas; the idea that all children are capable of learning everything that every other child can learn; they just learn “differently”. Experiences easily reinforce the ideas through the confirmation bias. (Caveat: “Bloom’s Taxonomy” is supported, but it is descriptive. The suggestion that teachers should draw from all levels of the taxonomy in teaching or assessment is unsupported.)
When we allow our good intentions to pave the road, it doesn’t lead to truth. Yes, we should be motivated by our values. We should consider our values when setting general goals. However, in order to reach the goals we claim to care about, in order to achieve the things we claim to value, we must separate those values from the work. We must not allow those values to enter into our decision-making processes.
In an effort to get to the bottom line in under 4,500 words, I’ll end with another quote from D.J. Grothe and a new, more direct reply:
I argue that the work of skepticism should not be divorced from our ethical imperative or “righteous indignation” to mitigate the harm that undue credulity causes. I don’t think you’re saying this.
Well, yes, actually (if you replace “divorced” with “separated”) I think I am.