If you buy into scientism, does that make you a scientist?
While I was on vacation, I missed a post by Sharon Hill on Skeptical Inquirer online. She recently re-shared the piece on Facebook, so I had an opportunity to give it a good read. Sharon’s pieces are usually filled with thoughtful reminders to rein in arrogance and to do more than merely tolerate other viewpoints: to embrace them and learn from them. I highly recommend following her regular columns there or at her blog, Doubtful News.
This recent piece seems to be in response to the current discussion about the limits (and lack thereof) of science, such as this piece by Steven Pinker. However, it lacks the nuance I’ve seen in criticisms of Pinker’s piece.
Hill’s piece defines scientism, science, and several other terms somewhat vaguely, simultaneously oversimplifying and overcomplicating the issue. She begins the argument by claiming, if I may use an analogy, that there are many different ways to skin a cat, but then supports that claim by pointing out that there are questions about whether the cat should be skinned, how much the process will cost, and whether the cat has a name. Answering these questions and skinning the cat are different tasks with different goals.
But it is this claim that I take the most issue with:
People who advocate fanatical reliance on science—where all competing methods of gaining knowledge are illegitimate—are practicing scientism.
This definition may very well put me in the category of “practicing scientism”, but that depends on what she means by “illegitimate”. While I recognize that personal knowledge can come from any number of methods and sources, deferring to personal knowledge is not a reasonable stance when it comes to enacting policies and making choices that involve other people. To make the best choices, we need to rely on shared knowledge.
And I certainly do believe that empirical methods are necessary to gain genuine, reliable information about the world. In fact, that’s a basic assumption of science (more on that later).
The “just apply science” plan is an overly simplistic solution that not everyone will automatically buy into. There are other, also valid ways of evaluating problems. All the world’s problems cannot be solved by throwing science at it. At least not now (probably never).
This is a confusing statement with twists and turns.
First, whether or not “everyone will automatically buy into” a solution is no measure of the solution’s value.
Second, the statement about evaluating problems conflates the different tasks and goals associated with solving problems. Science, philosophy, and the humanities are different animals. To complicate matters, science incorporates philosophy and the humanities incorporate some scientific thinking. None of these things can tell us what to value, either.
For example, philosophy studies problems; it doesn’t solve them. Philosophy can only provide a way of thinking, not the information that one is to think about. Science, for that matter, doesn’t solve problems, either. It seeks and provides information and explanation. Technology solves problems, but it doesn’t do so just by thinking about them. Technology uses the products of science and scientific thinking (which includes products of philosophy) to solve problems.
So, this seems like a lot of apples and oranges and bananas to me.
The piece also contains more than a few straw men. For example:
For a start, scientism has utility problems. If we need to justify everything with empirical evidence, and then justify that evidence with evidence, and so on, not only do we get bogged down in minutiae, we end up in a scientistic loop which we can’t resolve. There must be a point where we accept a premise as a given – that reality is real, that we aren’t being fooled by a devious creator.
This is not only a straw man, it’s misleading. Science does accept several premises as givens. In most college-level introductory science textbooks, you can find these listed as “canons” or “assumptions”. For example, science assumes that the universe is deterministic: that all events have natural causes. Without this assumption, science can tell us nothing about the world with confidence, because anything we observe might be explained by the supernatural.
So in a sense, the argument supports “scientism”.
Hill goes on to warn against over-enthusiasm for science because it “can mask the attention that should be paid to human social issues that are too complex…”, yet her examples are not issues too complex for science, but questions of policy which involve more than just information (e.g., one example involves the ethical question of whether to carry a fetus to full term knowing that it will be born with a debilitating condition). Science informs values; it doesn’t dictate them. However, values can’t answer those questions by themselves any more than science can.
Look at our laws. Many are informed by science (cigarette restrictions, driving after alcohol consumption, environmental regulations) but are tempered by other human interests such as personal pleasures, social norms and economic considerations.
Again, this seems a bit of a straw man. While there are those who claim that science can dictate values (which are embedded in each of those “human interests”), that is not a typical view and does not seem to be the view that Hill is railing against.
Science cannot tell us what we value or what we should value, but without scientifically derived information and thought processes, we will fail to make the choices and policies that promote those values.
Here is an example from my recent talks at TAM2013 and Dragon*Con, as covered in What Intelligence Tests Miss by Keith Stanovich:
In a study by Ubel, participants were asked to allocate 100 livers to 200 children who needed transplants. The children were presented in two groups: A and B. As you might expect, most participants divided the livers equally, giving half to one group and half to the other.
However, when the participants were told that the children in group A had an estimated 80% average chance of surviving the surgery, while the children in group B had only an estimated 20% average chance, the allocations varied much more. About one quarter of the participants gave all of the livers to group A, one quarter gave half to A and half to B, and the remaining half distributed the livers somewhere between these two choices (e.g., one quarter gave 75 of the livers to group A and 25 to group B).
When asked why they gave livers to group B, participants justified their actions by saying things like “needy people deserve transplants, whatever their chance of survival.” This, of course, ignores the real question, which is how to allocate a limited number of livers so as to save the most lives, and it tells us nothing about why a participant chose one child over another.
Participants in another study were given the same task, except that the recipients were not grouped. Instead, the recipients were listed individually, ranked by their individual chances of survival. If the justifications above were sincere, we would expect at least 25% of the participants to allocate the livers to every other child on the list, or somewhat randomly down it. Instead, participants had no problem allocating all of the livers to the top 100 children on the list.
The difference between the answers when the children are grouped and the answers when they are listed individually is called a “framing effect”: the way a problem is framed determines how a majority of the participants respond to it.
Now, science can’t tell us what’s “right” in this situation, but it can sure tell us how to meet our goals once we have decided what those goals are.
Let’s assume that our goal is to maximize the number of children who will be saved. Rational thought tells us that, given that goal and the choice between the two groups, we should give all of the livers to group A (science tells us that those are the children with the best chance of survival). The difference between that choice and the equal distribution is an expected 30 dead children.
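To make that arithmetic concrete, here is a minimal sketch in Python (my own illustration, not part of the study; only the 80% and 20% survival estimates come from the figures reported above):

```python
# Expected-survivor arithmetic for the liver-allocation example.
# Survival estimates: group A = 80%, group B = 20% (from the study as
# described above); 100 livers are available in total.
def expected_survivors(livers_to_a, p_a=0.8, p_b=0.2, total_livers=100):
    """Expected number of surviving children when `livers_to_a` livers
    go to group A and the remainder go to group B."""
    return livers_to_a * p_a + (total_livers - livers_to_a) * p_b

for split in (100, 75, 50):
    print(f"{split} to A, {100 - split} to B: "
          f"{expected_survivors(split):.0f} expected survivors")

# Output:
# 100 to A, 0 to B: 80 expected survivors
# 75 to A, 25 to B: 65 expected survivors
# 50 to A, 50 to B: 50 expected survivors
# The gap between all-to-A and the equal split is 30 expected deaths.
```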
It should be obvious from this example that considering our values and goals is not enough to make the best choices. We need good information and good thought processes to make the kinds of decisions that allow us to meet our goals.
One more statement that got under my skin:
When we overly indulge our science bias in informing decisions, such as in the realm of policy, the risk of making an unpopular guidance or rule increases.
Wait a minute. Is our goal to put the most popular policies in place or the best policies? For my part, I want policies that are best for society and the individuals within it. I don’t care if they are popular or not.
Science is not perfect or infallible, even when implemented correctly. Our knowledge is incomplete, which means that we will make plenty of mistakes when we act on that limited knowledge. In the long run, however, it will always beat unaided human judgment, allowing us to make the best decisions and take the best actions toward our goals.
Works cited:
Stanovich, Keith E. (2009). What intelligence tests miss: The psychology of rational thought. New Haven, CT: Yale University Press.
Ubel, P.A. (2000). Pricing life: Why it’s time for health care rationing. Cambridge, MA: MIT Press.
Actually, beyond your first two links, the best critique of Pinker I’ve seen (and, in other pieces, of scientism in general) is by Massimo. Douthat’s is kind of weak tea, actually.
I admit that I got lazy and Googled when I couldn’t immediately find the critiques that I’d read (some of which were unconvincing, but others were excellent). I’ll replace Douthat’s with Massimo’s. Thanks for the reminder that he wrote one!
Surely it is the belief that science can tell us what to value that can legitimately be called scientism. Of course there is considerable misuse of the term, but that doesn’t invalidate the concept.