Understanding climate change data and how climate models work is not a simple thing. Before diving into data points and models, it is important to establish a framework.
First, weather and climate are not the same thing. If it’s raining where you are right now, it is safe to assume that it will not rain forever, and that it was probably not raining at some point in the past. That rainy episode is a single weather event; it may or may not fit neatly into a long-term climate trend, and it may not even follow a short-term regional trend.
Second, environment and climate are not the same thing. While our species has developed a passion for debating weather and climate trends over the past 10,000 years, no one questions that pollution is bad, or that water, air, and soil should be kept as clean as possible so that our species may thrive without developing strange illnesses that turn us into zombies. Or worse.
This leads us to a third point: CO2 and pollution are not the same thing. Pollution is generally defined as toxins in an environment that negatively impact the well-being of the life that depends on that environment. By contrast, CO2 is a trace gas required for life on earth to survive. Through photosynthesis, plants consume CO2 and release the very oxygen humans need to breathe. CO2 is categorized among “greenhouse gases” because it traps heat in the atmosphere, but higher levels of CO2 also encourage plant growth and a greener earth. A greener earth is a healthier earth that benefits organisms at every level of an abundant food chain. Cars and industries can limit pollution while still producing life-giving CO2, and at roughly 400 parts per million, atmospheric CO2 is near its lows on geologic timescales.
Finally, scientists are not experts in everything. Most accredited scientists have a good or exceptional understanding of their particular field of study, and less expertise on subjects they have not studied or experienced firsthand. This brings us to one of the issues driving the climate change debate.
There’s a famous metaphor about four blind men who are asked to describe an elephant. One says, “An elephant is like a giant leaf that sways in the wind.” The next claims, “Surely you’re mistaken. An elephant is strong and flexible, like a giant snake that can wrap around your whole body.” The third shares that “an elephant is like a thin rope,” while the last states that “an elephant is big and round, like a tree trunk that cannot be lifted or moved without much coaxing.”
All four of the blind men are technically correct, and yet none has gained enough knowledge to accurately predict the elephant’s behavior. None can fully describe the elephant without comparing notes and reaching consensus with the others. And the metaphor says nothing of politics or other motivations. If it did, the story would tell how only the man who believes the elephant is a giant leaf receives funding to continue his research, while the other three are unable to get their studies published.
Unfortunately, climate science is much like the elephant metaphor. There are solar experts, environmental experts, meteorologists, climatologists, physicists, astrophysicists, geologists, volcanologists, oceanographers, marine biologists, and more, all studying distinctly different fields that relate to the climate in some way. Each has their own motivations for studying their primary subject and their own unique projects to work on, some funded by private enterprise and others funded by tax dollars. Yet the most vocal and well-recognized people propelling the climate change debate have expertise in only one, or none, of these fields.
Now that we have some context, let’s look at the reasons climate change data is worse than you think.
Anyone who has studied statistics understands the basics of establishing statistical significance for a data set. The levers in the formula are sample size and desired confidence level; with these you can draw conclusions from your data and state an expected margin of error (e.g., +/-5%). This can be applied to individual variables within a data set, or to the data set as a whole.
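As a concrete sketch of those levers, the standard worst-case margin-of-error formula for a sampled proportion can be computed directly. The sample size and confidence level below are illustrative only, not drawn from any climate data set:

```python
import math

# z-score for 95% confidence; other common choices: 1.645 (90%), 2.576 (99%)
Z_95 = 1.96

def margin_of_error(n, p=0.5, z=Z_95):
    """Worst-case margin of error for a sampled proportion.

    p=0.5 maximizes p*(1-p), giving the conservative bound
    usually quoted alongside polls and survey data.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1,000 observations at 95% confidence:
moe = margin_of_error(1000)
print(f"+/-{moe * 100:.1f}%")  # roughly +/-3.1%
```

Note the shape of the trade-off: quadrupling the sample size only halves the margin of error, which is why sparse historical records are so hard to tighten up after the fact.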
Using the climate as an example, there are dozens of variables that may influence climate change, including but not limited to: solar cycles, solar flares, lunar orbit, earth orbit, earth axis, cloud formation, water vapor, jet streams, ocean currents, ocean temperature, volcanic activity, thermodynamics, cosmic radiation, asteroid impacts, and of course the mix of gases in the atmosphere. The best climate models run by climatologists today consider only a handful of these variables; each variable carries its own estimated margin of error, and those errors compound across the model. No one has yet built a complete earth-simulation model, and this is why climate predictions have been so dramatically wrong to date.
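To illustrate how per-variable errors compound, here is a hedged sketch using made-up numbers; the variable names and uncertainty figures are hypothetical, not taken from any real model:

```python
import math

# Hypothetical per-variable uncertainties (as fractions) for a toy
# climate model; the names and values are illustrative only.
uncertainties = {
    "solar_forcing": 0.04,
    "cloud_formation": 0.10,
    "ocean_heat_uptake": 0.06,
    "aerosols": 0.08,
}

# If the errors are independent, they combine in quadrature;
# if they are fully correlated, the worst case is a straight sum.
independent = math.sqrt(sum(e ** 2 for e in uncertainties.values()))
worst_case = sum(uncertainties.values())

print(f"combined (independent): +/-{independent:.1%}")
print(f"combined (worst case):  +/-{worst_case:.1%}")
```

Even in this toy case, four modest uncertainties combine into a much larger one, and that is before accounting for the variables the model leaves out entirely.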
Scientists in every field are required to manage issues with sampling, quality, and gaps in data sets, and this is true in climatology as well. For example, sampling Earth’s air, ocean, and land surface temperatures has been limited and evolving. The first weather satellites were put into orbit in the late 1950s and early 1960s, and were designed only to take pictures. Until recently, ocean temperature data was collected voluntarily and reported by ships’ crews along trade routes, and the methods for collecting that data have not been consistent over time. A sailor hauling up an ocean sample by bucket might get a reading different from that of a captain drawing a sample from an engine-room intake warmed slightly by the cargo ship’s machinery. To standardize regional air temperature data, meteorologists focused on measurements taken at airports. Many airports began recording data when the surrounding area was soil and vegetation. Over time the footprint of the airport may have changed: vegetation was covered by concrete, and eventually covered again by asphalt. So even though the official data collection point remained consistent, other changes may have influenced the readings over time. Multiply this by thousands of ships, thousands of airports, and the addition of satellites and other evolving technologies, and one might conclude that the data is a mess.
Let’s assume that the same challenges and inconsistencies that affect air and ocean temperature sampling also apply to land surface temperature data, sun intensity and sunspot data, cloud formation data, volcanic activity, and the other variables that may influence climate. The way scientists typically reconcile variations in data collection is by normalizing the data: making assumptions to fill gaps, throwing out the data that make up the tails of the bell curve, and disregarding anomalies. Normalization can help models run more predictably; however, normalization methods can bias the data based on an individual’s experience, training, perspective, and motivation.
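A toy sketch makes those normalization steps concrete. This is an illustration of the general technique (fill gaps, then trim outliers), not any agency’s actual procedure, and the temperature readings are made up:

```python
from statistics import mean, stdev

def normalize(readings, k=2.0):
    """A toy normalization pass: fill gaps, then trim outliers.

    Gaps (None) are filled with the mean of the known values, and
    readings more than k standard deviations from the mean are dropped.
    Illustrative only; real pipelines are far more involved.
    """
    known = [r for r in readings if r is not None]
    fill = mean(known)
    filled = [fill if r is None else r for r in readings]

    mu, sigma = mean(filled), stdev(filled)
    return [r for r in filled if abs(r - mu) <= k * sigma]

# Bucket and engine-intake readings mixed together, with one gap
# and one implausible spike (values invented for the example):
raw = [14.2, 14.5, None, 14.1, 21.9, 14.4, 14.3]
print(normalize(raw))
```

Notice the subtle bias: the gap was filled with a mean that was itself inflated by the outlier, because the fill step ran before the trim step. The order of operations is exactly the kind of judgment call that can quietly skew a data set.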
Climate scientists generally agree that the mean surface temperature of the earth has increased by about 0.8C over the past 100 years. Given the issues with data collection, quality, gaps, and normalization, it is conceivable that the margin of error on this figure is greater than +/-1.0C. After accounting for that margin of error, it would be just as reasonable to say the earth has warmed by more than twice as much as reported, or perhaps not at all.
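The arithmetic behind that claim is simple enough to spell out. The 0.8C figure comes from the paragraph above; the +/-1.0C margin of error is the hypothetical one, not an established value:

```python
# Reported warming (from the text) and a hypothetical margin of error.
observed_warming = 0.8   # degrees C over the past 100 years
margin_of_error = 1.0    # hypothetical, for illustration

low = observed_warming - margin_of_error
high = observed_warming + margin_of_error
print(f"plausible range: {low:+.1f}C to {high:+.1f}C")
```

Under that assumed margin, the plausible range runs from slight cooling to warming well beyond the headline figure, which is the whole point: the uncertainty band is wider than the signal it brackets.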
Even if the scientific community had achieved absolute consensus on global warming (it has not, and no serious scientist would declare a question settled and stop studying it), climate models would still be very far from predicting future climate with any accuracy or reliability.
Today, forecast accuracy degrades quickly beyond a 3-day outlook, and regional 10-day forecasts reported by meteorologists are no better than random chance. If we cannot accurately predict regional weather beyond a few days, how can we reasonably expect to forecast 10, 20, or 100 years of climate change?