This article is one in a series. For the other articles, please see: INDEX: “Why You Can’t Trust Government Science”
On One Hand: Trustworthy Science
- Is published,
- Is peer-reviewed,
- Discloses conflicts of interest (or, better yet, has none), and
- Makes public all the data and other information needed for its results to be verified and reproduced by third parties.
For more details, see: How To Separate Good Science From Sketchy Tales.
On The Other Hand: Corporate & Federal Regulatory Science
Most scientific studies submitted to federal regulatory agencies are:
- Not published,
- Not peer reviewed,
- Not transparent,
- Not disclosed (trade secrets),
- Not reproducible by independent scientists, and
- Conducted by corporations and labs with glaring conflicts of interest.
Ironically, government regulators base their safety decisions on those untrustworthy studies: “science” that has no safeguards, no oversight against fraud or incompetence, and is conducted by people with huge financial incentives to hide dangers so that chemicals can be approved.
Respectable Scientists Don’t Trust Government Science
Respectable scientists do not trust government regulatory science, most of which is funded and produced by corporations and private science labs.
So it’s not surprising that the general public is also skeptical: Americans Trust Scientists, Mistrust Government, Corporations.
Why Does Federal Regulatory Science Deserve To Be Mistrusted?
Unknown to the general public is the fact that government regulatory and safety assessments are based on private, secret studies funded by corporations seeking regulatory approval.
For more on that, please see:
Those regulatory studies have no oversight, no transparency and no controls over quality or integrity.
According to a landmark legal review of federal regulatory science published by the Boston University School of Law (Equal Treatment for Regulatory Science):
“Regulated parties who sponsor research that informs regulation of their products or activities have incentives to influence the research in ways that ensure favorable outcomes.
“Yet since research design and reporting is inherently layered with discretionary judgments that are difficult to discern without replicating the research directly, systemic biases in these judgments are difficult to detect from the outside.
“As long as sponsors control the research at some or all points in the research process, adverse results can be suppressed and the design and reporting of experiments can be biased in ways that produce results that support the sponsor’s interests, rather than offer a disinterested examination of potential harms.” [emphasis added]
“Despite their rather obvious points of convergence, these two sets of concerns have remained separate over the past decade. Worrisome evidence of compromised private research is effectively ignored as the “sound science” reforms take aim primarily at publicly funded research.
As a result, oversight of the quality of regulatory science is growing increasingly bimodal: public research is subject to increased scrutiny, while private research remains largely insulated from outside review and meaningful agency oversight.” [emphasis added]
“Just Trust Me:” Science With Conflicts Of Interest Built In
The general public is unaware that most studies conducted for safety and regulatory purposes are performed by private labs paid by the very corporations seeking to have their products and chemicals approved by government bureaucrats.
Further, most of those studies are secret, classified as “Confidential Business Information,” and beyond the review of independent scientists or members of the public who would like to know whether they were honestly and competently done.
Your safety, then, lies in a corporation’s “just trust me” ability to police itself.
But “just trust me” goes only so far when millions — or billions — of dollars are at stake. Corporations need to be sure that they can produce studies that fit their marketing and regulatory goals.
The private laboratories that perform these studies understand that — if they are to continue to receive lucrative contracts from corporate customers — they need to make sure the “scientific” conclusions meet business requirements.
From: Equal Treatment for Regulatory Science
“Public health regulators make life and death decisions when they promulgate standards to protect the public health. If the research they rely upon to make these decisions is compromised, then there may be more losses, perhaps substantially more, than the regulators or the public onlookers are willing to tolerate.
“An accumulating body of evidence suggests that some of the private science that forms the primary, and sometimes the exclusive, input for regulatory decisions regarding public health and safety lacks important scientific safeguards that could result in research that underreports harms to health and the environment. [emphasis added]
“There is growing evidence that it [private corporate lab science] can be compromised in ways that might under-report or even suppress evidence of harm. Sponsors face strong incentives to design and report research in ways most favorable to their interests and to suppress adverse results provided they can do so without detection. [emphasis added]
“In the past, more than a few products or pollutants have been left effectively unregulated because the manufacturer or polluter concealed evidence of the true harm or obscured adverse results.
Dishonest (But Legal) “Science” Can Twist Studies To Fit The Desired Results
Dishonest science starts from the desired outcome of an experiment, then skews the laboratory methods to produce the conclusion it was paid for.
Omissions and biases are never challenged in the corporate studies that underpin regulatory decisions, because those studies are unavailable for review and oversight by the public and independent scientists. Were they published in a respectable, peer-reviewed journal, any sleight of hand could be exposed.
The Boston University School of Law review (Equal Treatment for Regulatory Science) reports that,
“In the design of the research, there are often choices to be made by the researcher about:
- test subjects,
- laboratory conditions,
- lengths of time of the study, and
- what types of observations to report, even for rigidly specified protocols.
“In conducting laboratory tests on the toxicity of a substance, for example, researchers might focus exclusively on recording the tumors (if the experiment is designed to test for cancer) and will not even record or take written notice of other types of surprise adverse reactions that occur in the course of the study.”
“In a self-designed study of the effects of pesticides on birds, for example, the researcher might make decisions about which effects to notice and record in the data log, and then later, which effects to statistically analyze.
“If each of these incremental discretionary decisions is made in a way most favorable to the sponsor, the results can ultimately tend toward one side of the results spectrum.
“Similarly, decisions about how to report effects in a study can be affected by a researcher’s predisposition towards the outcome.
“Some adverse effects can be downplayed or explained away in the written findings, while the positive outcomes of the study can be overemphasized.
“In one study of 192 random clinical trials conducted on prospective drugs, for example, the researchers found that the written reports of the research did not adequately describe the adverse effects of the drugs under study or explain why a patient stopped taking the drug.”
Outright Fraud Rare (As Far As Anyone Can Tell)
Falsification, according to the Boston University law review article, is uncommon in part because corporations can achieve the same ends legally:
“Sponsors can also design or report regulation-relevant research in ways that are favorable to their interests, but fall short of being clearly fraudulent or dishonest.”
The sections below detail just how easy it is to be dishonestly legal.
Though rare, it should be noted that when the need arises, corporations will fabricate and falsify to make sure they can keep selling their products. Catching companies in the act is also rare, but the following story, based on evidence filed in a federal court case, shows each step along the way to scientific fraud: Predictable Outcomes: How-To Skew Experiment Design To Produce Predictable Results