Scientific Literacy: people aren’t very good at tests they haven’t revised for
One of the topics which comes up a lot in social studies of science – and which comes up regularly on the skeptical circuit – is the complaint that the public don’t understand science. There are numerous questionnaires which test the public’s knowledge, and they get published about once every three months. One which I noticed earlier was this one, looking at relative levels of scientific knowledge between men and women. Another is this, also by Sheril Kirshenbaum. There are countless other examples of what is generally known as scientific literacy.
Many people take the view that the small number of people who can answer these questions – in the examples cited, about the relative sizes of subatomic particles, orbital periods of planets, genetics and the origins of the universe – shows that members of the public are not adequately educated about science. However, despite extensive efforts by educators, the rates of knowledge remain static between surveys.
For me the real question is what we’re measuring, and what the significance of that measurement is. When were people asked, and in what social setting? The only time I’ve ever been invited to participate in social research is on the high street when shopping: how many people could cite scientific facts while worrying about getting a hot sausage roll from Greggs?
In addition, what relevance does the knowledge being assessed have to everyday lives? Whilst I knew the answers being sought to the questions above (and I’d love to discuss the question of genetic gender determinism further), I can’t say they form anything other than an interesting backdrop to the day-to-day decisions of my life. So what if it takes the Earth 365.25ish days to orbit the sun – does it affect the housework I’m procrastinating over by writing this? The relative size of electrons has equally little import, and I’m becoming less and less concerned about how people think the universe was created (it was nearly 14 billion years ago that the expansion started, but I digress), provided they don’t think they have the right to force that belief onto other people. Granted, there may be a positive correlation between people who believe in special creation and people who want to force that belief on me, but such people remain a small minority.
And why don’t we expect publics to know facts about philosophy, literature and religion? What is it that means people should definitely know that electrons are even smaller than small, but we don’t care whether they know how and why Socrates died? Why does ignorance of the rotation of our planet excite rage when knowledge of the contributions of Keats, Ives and Spinoza barely raises an interest?
Whilst many members of the public may fail these crude general knowledge tests, we also know that people don’t operate in a scientific vacuum. When they need to know something, they have the resources, either in themselves or in the people around them, to gain understanding. Classic examples are diabetes and cancer: non-specialists can gain a sophisticated understanding of the conditions and their treatment options when required, but might otherwise fail these tests. One of the advantages of the Internet is access to a vast library of human knowledge (which comes hand in hand with access to a vast library of human ignorance, of course).
Demanding that people have these pieces of knowledge may also lead us to underestimate the knowledge they do have. As a classic example, during the BSE crisis, failure to take into account butchers’ knowledge of meat preparation meant that the first set of rules designed to prevent BSE entering the food chain failed to work, something the policy makers only discovered five years after the regulations came into force. (I am assured by my master butcher brother-in-law that the first regulations were laughable, showing a complete ignorance of what happens in an abattoir.) People build complex knowledge structures of the world they live in. Sometimes these knowledge structures are implicitly scientific, but if you asked the people concerned they wouldn’t say they had any scientific knowledge.
One of the groups I have an interest in, ambulance crews, have a vast body of knowledge about humans, their bodies and their responses to illness and injury, but not many of them would necessarily say they are applied scientists. If such a person is unable to answer questions on subatomic particles, should we discard the complex, subtle and deep knowledge they have gained over years of practice and declare them unscientific?
Even if we agree that pop science quizzes have some kind of intrinsic value (and that’s as much a question for the philosophers!), what implications do they have for society? The common assertion is that the scientifically illiterate cause harm because of poor quality decisions. For example, if someone doesn’t know how dilution works, they may mistakenly think that homeopathy works and fail to get vaccinated. It’s worth noting here that there’s a clear difference between science as a body of facts and science as a process. For example, a physicist and a biologist probably have a good understanding of how science works, but probably don’t compare in terms of factual knowledge about each other’s fields.
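The dilution point, at least, does come down to simple arithmetic. A back-of-the-envelope sketch (the function name and the starting quantity of one mole are my own illustration, not from any particular homeopathic preparation):

```python
# Back-of-the-envelope arithmetic for homeopathic "C" dilutions.
# One C step dilutes the preparation 1 part in 100; 30C repeats that 30 times.
AVOGADRO = 6.022e23  # molecules per mole

def molecules_remaining(moles_of_active, c_dilutions):
    """Expected number of active molecules left after repeated 1:100 dilutions."""
    return moles_of_active * AVOGADRO * (1 / 100) ** c_dilutions

# Start with a generous whole mole of active substance:
print(molecules_remaining(1, 12))  # ~0.6 -- already less than one molecule at 12C
print(molecules_remaining(1, 30))  # ~6e-37 -- effectively zero at 30C
```

In other words, beyond roughly 12C the expected number of molecules of the original substance drops below one, which is the factual knowledge the quizzes are presumably probing for.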
Indeed, the evidence on science-as-knowledge doesn’t really support the position that ‘ignorant’ people make ‘bad’ decisions. In fact, the evidence suggests that the more science (of both types) people know, the more ambivalent they are about science. Thus the people most likely to reject genetically modified foods and vaccination are also those most likely to pass the knowledge assessment. Indeed, anti-GM campaigners often have depths of knowledge comparable to those of practitioners in the field.
Indeed, I’m not even sure how a more scientifically literate world would differ from the world we see today. This is particularly true when we take into account evidence about motivated reasoning: people with pre-existing opinions are less likely to change their minds when confronted with evidence which strongly conflicts with their existing beliefs.
Now, I’m all for ensuring that science is made accessible to different publics, that publics have opportunities to engage with science if they wish, and that our education system is as good as it can possibly be. Wrong information, particularly when it’s associated with exploitation, can and should be countered, if necessary by reference to appropriate enforcement bodies (the ongoing ASA campaigns, for example). However, we can’t use these measurements as evidence of anything other than what they measure. And the only conclusion I can draw is that people who are given a science test without a chance to revise tend not to do very well.
But any undergraduate could have told you that.
[Edit: Sheril pointed out that both of the posts I referenced were actually authored by her, so I corrected the text]