In the real world, “bias correction” is the adjustment of a component of an instrument so that its readings of an observed parameter are more accurate.  This is clearly understood, for example, by the armies of glass electrode pH meter users who adjust their meters* to reflect the prevailing temperature before they take a measurement.   If they were to adjust the meter reading AFTER the measurement was taken, that might be a failed exercise**.

In the real world, a scientific or engineering simulation or model is calibrated to actual observations.  This is a type of intrinsic bias correction, and it is naturally done only BEFORE the simulation is performed.   If a simulation fits the observations poorly, its results are eliminated from further use.  In other words, when a model produces a poor result, the analyst doesn’t adjust those results to fit the answer.  Rather, the analyst returns to the inputs fed to the model and adjusts some of them, aiming to produce a more accurate simulation.  There are many limitations; for example, the analyst cannot feed the model unrealistic inputs.  In any case, after the inputs are adjusted, the simulation is launched again.  If the subsequent result is sufficiently improved in comparison to the observations, then the analyst might proceed to exercises such as forecasting into the future with the same model.
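
To make that sequence concrete, here is a minimal sketch of such a pre-projection calibration loop in Python.  The names (run_model, candidate_parameter_sets) and the Nash-Sutcliffe skill threshold are hypothetical stand-ins, not any particular agency’s procedure; the point is only that the comparison to observations happens before the model is trusted, and that a poor fit removes the run from further use.

    import numpy as np

    def nash_sutcliffe(observed, simulated):
        # Skill score: 1.0 is a perfect fit; values at or below 0 mean the model
        # is no better than simply predicting the observed mean.
        observed = np.asarray(observed, dtype=float)
        simulated = np.asarray(simulated, dtype=float)
        return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

    def calibrate(run_model, observed, candidate_parameter_sets, threshold=0.7):
        # Try only physically plausible parameter sets, BEFORE any projection is made.
        best_params, best_skill = None, float("-inf")
        for params in candidate_parameter_sets:
            simulated = run_model(params)  # hindcast over the observation period
            skill = nash_sutcliffe(observed, simulated)
            if skill > best_skill:
                best_params, best_skill = params, skill
        if best_skill < threshold:
            return None  # poor fit: set the model aside; do not adjust its output
        return best_params

Only if a parameter set clears the threshold would the analyst go on to run the model forward for a projection.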

Only through that process can an analyst produce plausible and reproducible projections.  Even when a model matches the observation history well, the analyst will typically communicate candidly that the projection is nonetheless very uncertain because, after all, the model is only a simplification of the real world.

And again in that real world, if the analyst simply changes the results AFTER the simulation is performed, this is typically considered a failed exercise.  For any student, this is analogous to taking a test.  When you hand in your test, you are done.  You don’t get a chance to look up the answers afterward, retrieve your test, change your answers, and resubmit it.

Also, in the real world, if the analyst changes the actual observations to fit the model, or hides or obscures the actual observations to make the model results appear more accurate than they actually are, these too would typically and ultimately be considered failed exercises.

In the US Bureau of Reclamation (USBR) and National Aeronautics and Space Administration (NASA) alternate realities, everything is turned inside out.   As the figure below from source [1] demonstrates, NASA instructs scientists to “bias correct” the already calibrated model.  I interpret this as an instruction to change the test results after the test was already taken.   As my overlay of dotted green circles on the flow chart indicates, this is done twice.

NASA apparently instructs hydroclimate modelers not to improve their calibration, but rather to simply change their final results to fit the observations.  If we were all allowed this option, we’d all ace every test we ever took.
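
For readers unfamiliar with what this post-hoc adjustment looks like in practice, reference [1] surveys several such methods; one common variant is empirical quantile mapping, sketched below under the assumption of simple one-dimensional arrays.  This is an illustrative sketch, not the exact code used by NASA or USBR.

    import numpy as np

    def quantile_map(model_hist, observed_hist, model_values):
        # Empirical quantile mapping: each model value is replaced by the observed
        # value sitting at the same quantile of the historical model distribution.
        # The "correction" is derived entirely AFTER the model has been run.
        sorted_hist = np.sort(np.asarray(model_hist, dtype=float))
        quantiles = np.searchsorted(sorted_hist, model_values) / sorted_hist.size
        quantiles = np.clip(quantiles, 0.0, 1.0)
        return np.quantile(np.asarray(observed_hist, dtype=float), quantiles)

Applied to the historical period itself, output mapped this way will closely match the observed distribution regardless of how well the underlying model actually performed, and the same mapping fitted on history is then carried forward onto the model’s future output.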

I suppose it is something, at least, that NASA discloses that this is done.  In comparison, the USBR guidance on bias correction [2] is additionally problematic.  Among other statements, it asserts that bias corrections are applied to simulations of distant future events.   As I’ve introduced, to correct for bias one must have observations to correct against.  That is impossible for a distant future projection.

Tunneling down to some specifics, the USBR Western US streamflow simulations are run by the thousands, subjected to the above bias correction, and then published without an overlay of the actual observations.  Such a practice denies any peer or other reader the customary means of immediately estimating skill.  Reference [3] provides an example, which I also explore in this recent post.
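
As an illustration of the kind of side-by-side check that an observation overlay would permit, here is a short sketch using placeholder data in place of the published run [3] and a gauge record; the numbers are invented for illustration only.

    import numpy as np
    import matplotlib.pyplot as plt

    # Placeholder series; in practice these would be the published simulation [3]
    # and a gauge record for the same reach and the same months.
    rng = np.random.default_rng(0)
    observed = rng.gamma(shape=2.0, scale=50.0, size=120)   # monthly flows
    simulated = 1.3 * observed + 20.0                       # a biased stand-in run

    percent_bias = 100.0 * (simulated.sum() - observed.sum()) / observed.sum()
    print(f"percent bias: {percent_bias:.1f}%")             # immediate skill estimate

    plt.plot(observed, color="black", label="observed gauge record")
    plt.plot(simulated, color="red", label="published simulation")
    plt.xlabel("month")
    plt.ylabel("streamflow")
    plt.legend()
    plt.show()

With the observations plotted alongside the simulation, any reader can judge the fit at a glance; without them, no such estimate is possible.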

In my view, these bias correction practices demonstrate with high certainty that the models are invalid.  Yet Bureau models are widely promoted, as if the simulations added statistical confidence to alarming assertions of droughts and climate change.  For my part as a professional forecasting hydrologist, I’ve reached out to many at the Bureau and related institutions.  If I have confirmed anything, it is that “bias correction” practices are the new normal for the US Bureau of Reclamation, NASA, and related organizations that promote alarm over fossil fuel burning.

 

*Typically this is now an automatic step within the meter.

** There are exceptions, but only when grounded in a clear disclosure of the history of the meter readings and a comparison to an independent alternate method to understand the observations.  Even then, the need to do that indicates a clear need for a new meter!

[1] Adams, E., 2017.  “Introduction on Bias-Correction Methods.”  Eastern and Southern Africa Regional Science Associate, Earth System Science Center, The University of Alabama in Huntsville, NASA / SERVIR Science Coordination Office, 9 March 2017.  https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20170005507.pdf

[2] US Bureau of Reclamation, 2011.  Technical Memorandum No. 86-68210-2011-01, “West-Wide Climate Risk Assessments: Bias-Corrected and Spatially Downscaled Surface Water Projections.”  https://www.usbr.gov/watersmart/docs/west-wide-climate-risk-assessments.pdf

[3] West-Wide Climate Assessment run number streamflow_riog_usbr_mon_00030.  This run is associated with the US Bureau of Reclamation report “West-Wide Climate Risk Assessments: Bias-Corrected and Spatially Downscaled Surface Water Projections,” Technical Memorandum No. 86-68210-2011-01.

 
