Atlantic Institute for Policy Research Highlights
Economic policy commentary and research from the AIPR

Evidence-informed policy is not that easy

Author: Herb Emery

Posted on Feb 11, 2020

Category: Government, Social Policy

Is public input displacing science?

Next month, the Standing Committee on Climate Change and Environmental Stewardship, an “all party” committee of the Legislative Assembly, will hold public hearings “seeking input on the use of pesticides, herbicides, including glyphosate, in the province”. The committee will make non-binding recommendations “to inform lawmakers on how to regulate herbicide use”.

When I lived in Calgary, political hearings like this influenced the city’s decision to stop putting fluoride in its water supply. The purpose of these public consultations—whether we recognize it or not—is to reduce, if not eliminate, the role of evidence and science in public decisions. These consultations democratize decision-making by creating a channel through which citizens can communicate their own values to politicians.

But do they result in better decisions for the population? That’s a question I want to explore in a panel discussion on evidence-informed policy that I’m organizing in Fredericton on February 19, called “Can Science Save New Brunswick”. There are consequences to crowding out science to make room for greater public input. We need to talk more about those consequences.

Everyone’s an expert when nobody knows what an expert is.

Legislative committees seeking public input are interesting because the products, chemicals, or vaccines in question have often been declared “safe for intended use” by regulatory bodies like Health Canada with the capacity and expertise to evaluate scientific evidence with respect to benefits and harms. When Health Canada determines there is evidence of benefit and little evidence of harm, that is generally consistent with an interpretation of “proven safe”.

The decision to ban or allow use of certain chemicals, medicines, devices, or practices should simply be a matter of looking at what the scientific evidence indicates, right? But nothing is ever that simple. Even the strongest science can never provide us with 100 percent certainty on anything. At best, it provides a signal of what is effective, safe, or harmful. At the end of the day, it is still a lawmaker’s job to make a decision that will likely still have a cloud of uncertainty around it, no matter what the science signals.

In the past, governments appointed expert panels, commissions, and boards to provide an objective and apolitical assessment of the facts, evidence, and expert opinion on a matter. These structures served as an independent layer between politicians and the lay public. But this approach only works if there is a general understanding about what should be considered good science, what is high-quality evidence, and who is fit to be considered an expert.

Science used to be judged on methodology and rigor. Now it’s judged on who funds it.

Once upon a time, we trusted scientists as objective experts who were knowledgeable about their subject area. Industry has always employed a lot of scientists and funded a lot of research, but this wasn’t a problem because we trusted scientific methodologies and the professional conduct and ethics of scientists themselves.

Today, public trust in scientists and the studies they produce appears to depend on who they work for and who funds their research. Instead of the methodology, the rigor of the study, or the size of the sample, the quality that most determines whether science is “good science” today seems to be the subjective trustworthiness of the scientist conducting the study.

The lost science of the evidence pyramid

Those in research or scientific fields at least have the benefit of well-accepted principles for ranking the weight of evidence based on quality and risk of bias. The ‘evidence pyramid’ helps sort the value of science from the weakest (expert opinion and editorials) to the strongest (randomized trials and/or systematic reviews) with some objectivity.

The type of research I do—termed ‘observational studies’—falls just above “expert opinion” in its value. These studies are often analyses of survey data or administrative data. Because they lack the structure of a scientific experiment—notably a control group to compare the study group of interest against—they are considered weak evidence: they can’t tell you what is causing the outcomes of interest. Correlation is not necessarily causation.

One of the weightiest levels of evidence is the “randomized clinical trial”, where you study an intervention by treating one group of subjects and comparing their outcomes to those of similar people who did not receive the intervention. Random assignment determines who receives the treatment and who is in the control group. These studies are costly to conduct and hard to use in many contexts. For example, to prove the value of literacy, would you want to randomly assign some children to be prevented from learning how to read?

While the evidence pyramid puts the value of science in perspective, it still does not give us a clear way to label science as ‘good’ or ‘bad’. That makes it easy to argue about what the science says simply based on one’s preference for how, where, and for whom a scientist conducts his or her work, or for what kinds of studies are convincing.

Science can point us in the right direction but the public must be willing to listen.

The biggest challenge with evidence-informed decision making by government is that scientific evidence doesn’t magically align voters and citizens behind a decision, especially if the science signals a direction that conflicts with citizen values and preferences. The global debate over vaccination of children is a great example of this conflict in action. The science clearly shows that the benefits of vaccination far outweigh any risks of harm. But science can never prove that something is 100% risk free.

Some people will not accept any risk of harm regardless of what the evidence signals, even if those risks appear small or negligible. So if a government appears to be using expert assessment or an evidence-based case to impose something on you, or to make a change that you fear is harmful, what are your options?

Answer: You use committees, hearings and other public channels to cast doubt on the evidence that you do not like and you question the motives and biases of the scientists who produced the evidence and the bodies tasked with assessing the evidence.

Evidence and science move from being a basis for policy making to policy being the basis for evidence making and scientific inquiry. That may serve a political agenda, but it’s hard to see how it serves the public good.
