Climate attribution method overstates “fingerprints” of external forcing

From Climate Etc.

by Ross McKitrick

I have a new paper in the peer-reviewed journal Environmetrics discussing biases in the “optimal fingerprinting” method which climate scientists use to attribute climatic changes to greenhouse gas emissions. This is the third in my series of papers on flaws in standard fingerprinting methods: blog posts on the first two are here and here.

Climatologists use a statistical technique called Total Least Squares (TLS), also called orthogonal regression, in their fingerprinting models to fix a problem in ordinary regression methods that can lead to the influence of external forcings being understated. My new paper argues that in typical fingerprinting settings TLS overcorrects and imparts large upward biases, thus overstating the impact of GHG forcing.

While the topic touches on climatology, for the most part the details involve regression methods, which are what empirical economists like me are trained in. I teach regression in my econometrics courses and I have studied and used it throughout my career. I mention this because if anyone objects that I’m not a “climate scientist,” my response is: you’re right, I’m an economist, which is why I’m qualified to talk about this.

I have previously shown that when the optimal fingerprinting regression is misspecified by leaving out explanatory variables that should be in it, TLS is biased upwards (other authors have also proven this theoretically). In that study I noted that when anthropogenic and natural forcings (ANTH and NAT) are negatively correlated, the positive TLS bias increases. My new paper focuses on this issue since, in practice, climate model-generated ANTH and NAT forcing series are negatively correlated. I show that in this case, even if no explanatory variables have been omitted from the regression, TLS estimates of forcing coefficients are usually too large. Among other things, since TLS-estimated coefficients are plugged into carbon budget models, this will result in carbon budgets that are biased too small.


In 1999 climatologists Myles Allen and Simon Tett published a paper in Climate Dynamics in which they proposed a Generalized Least Squares or GLS regression model for detecting the effects of forcings on climate. The IPCC immediately embraced the Allen & Tett method and in the 2001 Third Assessment Report hailed it as the way to show a causal link between greenhouse forcing and observed climate change. It’s been relied upon ever since by the “fingerprinting” community and the IPCC. In 2021 I published a Comment in Climate Dynamics showing that the Allen & Tett method has theoretical flaws and that the arguments supporting its claim to be a valid method were false. I provided a non-technical explainer through the Global Warming Policy Foundation website. Myles Allen made a brief reply, to which I responded, and then economist Richard Tol provided further comments. The exchange is at the GWPF website. My Comment was published by Climate Dynamics in summer 2021, has been accessed over 21,000 times and its Altmetric score remains in the top 1% of all scientific articles published since that date. Two and a half years later Allen and Tett have yet to submit a reply.

Note: I just saw that a paper by Chinese statisticians Hanyue Chen et al. partially responding to my critique was published by Climate Dynamics. This is weird. In fall 2021 Chen et al submitted the paper to Climate Dynamics and I was asked to provide one of the referee reports, which I did. The paper was rejected. Now it’s been published even though the handling editor confirmed it was rejected. I’ve queried Climate Dynamics to find out what’s going on and they are investigating.

One of the arguments against my critique was that the Allen and Tett paper had been superseded by Allen and Stott 2001. While that paper incorporated the same incorrect theory from Allen and Tett 1999, its refinement was to replace the GLS regression step with TLS as a solution to the problem that the climate model-generated ANTH and NAT “signals” are noisy estimates of the unobservable true signals. In a regression model if your explanatory variables have random errors in them, GLS yields coefficient estimates that tend to be biased low.

This problem is well-known in econometrics. Long before Allen and Stott 2001, econometricians had shown that a method called Instrumental Variables (IV) could remedy it and yield unbiased and consistent coefficient estimates. Allen and Stott didn’t mention IV; instead they proposed TLS and the entire climatology field simply followed their lead. But does TLS solve the problem?

No one has been able to prove that it does except under very restrictive assumptions, and you can’t be sure whether they hold or not. If they don’t hold, TLS generates unreliable results, which is why researchers in other fields don’t like it. The underlying problem is that TLS requires more information than the data set contains, which forces the researcher to make arbitrary assumptions to reduce the number of parameters needing to be estimated. The most common assumption is that the error variances are the same on the dependent and explanatory variables alike.

The typical application involves regressing a dependent “Y” variable on a set of explanatory “X” variables, and in the errors-in-variables case we assume the latter are unobservable. Instead we observe “W’s,” which are noisy approximations to the X’s. Suppose we assume the error variances on the X’s are all the same and equal S times the variance of the errors on the Y variable. If that assumption happens to be correct, so that the true value of S is 1 and we assume S=1, TLS can in some circumstances yield unbiased coefficients. But in general we don’t know whether S=1, and if it doesn’t hold, TLS can go completely astray.
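The attenuation effect itself is easy to see in a small simulation. The following sketch (my own illustration, not code from the paper) regresses Y on a noisy proxy W when the true beta is 1 and the error-variance ratio S is 1:

```python
import numpy as np

# Illustrative sketch of classical errors-in-variables (not the paper's code).
# True beta = 1; x is unobservable; we regress y on the noisy proxy w.
rng = np.random.default_rng(0)
n, reps, beta, S = 500, 2000, 1.0, 1.0

estimates = []
for _ in range(reps):
    x = rng.normal(0.0, 1.0, n)                # true, unobservable signal
    y = beta * x + rng.normal(0.0, 1.0, n)     # y-error variance = 1
    w = x + rng.normal(0.0, np.sqrt(S), n)     # observed proxy: x plus noise
    estimates.append(float(w @ y / (w @ w)))   # OLS slope (no intercept)

# The OLS slope converges to var(x) / (var(x) + S) = 1/2 here,
# half the true value: the classic attenuation bias.
print(round(float(np.mean(estimates)), 2))     # ≈ 0.5
```

This downward bias is the problem TLS was brought in to fix; the question is what TLS does when its own assumptions fail.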

In the limited literature discussing properties of TLS estimators it is usually assumed that the explanatory variables are uncorrelated. As part of my work on the fingerprinting method I obtained a set of model-generated climate signals from CMIP5 models and I noticed that the ANTH and NAT signals are always negatively correlated (the average correlation coefficient is -0.6). I also noticed that the signals don’t have the same variances (which is a separate issue from the error terms not having the same variances).

The experiment

In my new paper I set up an artificial fingerprinting experiment in which I know the correct answer in advance and I can vary several parameters which affect the outcome: the error variance ratio S; the correlation between the W’s; and the relative variances of the X’s. I ran repeated experiments based in turn on the assumption that the true value of beta (the coefficient connecting GHG’s to observed climate change) is 0 or 1. Then I measured the biases that arise when using TLS and GLS (GLS in this case is equivalent to OLS, or ordinary least squares).
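The flavour of the experiment can be conveyed in a few lines (a simplified sketch of my own, not the paper’s code): two negatively correlated true signals with coefficients of 1, observed with noise for which the equal-variance assumption S=1 happens to hold exactly, estimated by OLS and by TLS computed from the SVD of the augmented data matrix:

```python
import numpy as np

# Simplified sketch of the experiment (my own code, not the paper's): two
# true signals with correlation -0.6 and true coefficients (1, 1), observed
# with noise for which the S = 1 assumption holds exactly.
rng = np.random.default_rng(1)
n, reps, rho = 200, 1000, -0.6
cov = np.array([[1.0, rho], [rho, 1.0]])
beta = np.array([1.0, 1.0])

ols_first, tls_first = [], []
for _ in range(reps):
    X = rng.multivariate_normal([0.0, 0.0], cov, n)   # true signals
    y = X @ beta + rng.normal(0.0, 1.0, n)
    W = X + rng.normal(0.0, 1.0, X.shape)             # noisy observed signals

    # OLS on the noisy signals: attenuated toward zero
    ols_first.append(float(np.linalg.lstsq(W, y, rcond=None)[0][0]))

    # TLS under the S = 1 assumption: coefficients come from the smallest
    # right singular vector v of the augmented matrix [W | y], b = -v[:2]/v[2]
    v = np.linalg.svd(np.column_stack([W, y]))[2][-1]
    tls_first.append(float(-v[0] / v[2]))

print(round(float(np.mean(ols_first)), 2))   # well below 1: OLS biased down
print(round(float(np.mean(tls_first)), 2))   # close to 1: S = 1 truly holds
```

In this idealized case the TLS assumption matches reality, so TLS recovers the true coefficients while OLS is badly attenuated; the point of the paper is that in practice the assumption cannot be verified, and when it fails TLS develops large biases of its own.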

These graphs show the coefficient biases using OLS when the experiment is run on simulated X’s with average relative variances (see the paper for versions where the relative variances are lower or higher).

The left panel is the case when the true value of beta = 0 (which implies no influence of GHGs on climate) and the right is the case when true beta=1 (which implies the GHG influence is “detected” and the climate models are consistent with observations). The lines aren’t the same length because not all parameter combinations are theoretically possible. The horizontal axis measures the correlation between the observed signals, which in the data I’ve seen is always less than -0.2. The vertical axis measures the bias in the fingerprinting coefficient estimate. The colour coding refers to the assumed value of S. Blue is S=0, which is the situation in which the X’s are measured without error so OLS is unbiased, which is why the blue line tracks the horizontal (zero bias) axis. From black to grey corresponds to S rising from 0 to just under 1, and red corresponds to S=1. Yellow and green correspond to S >1.

As you can see, if true beta=0, OLS is unbiased; but if beta=1 or any other positive value, OLS is biased downward, as expected. However, the bias goes to zero as S goes to 0. In practice, you can shrink S by using averages of multiple ensemble runs.

Here are the biases for TLS in the same experiments:

There are some notable differences. First, the biases are usually large and positive, and they don’t necessarily go away even if S=0 (or S=1). If the true value of beta =1, then there are cases in which the TLS coefficient is unbiased. But how would you know if you are in that situation? You’d need to know what S is, and what the true value of beta is. But of course you don’t (if you did, you wouldn’t need to run the regression!)
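A stripped-down one-regressor example (my own illustration, much simpler than the paper’s multivariate setup) shows how a bias can persist at S=0: the signal is observed without error, yet TLS is applied under the usual assumption that S=1.

```python
import numpy as np

# One-regressor sketch (my own): true beta = 1, the regressor x is observed
# WITHOUT error (true S = 0), but TLS is applied as if S = 1.
rng = np.random.default_rng(2)
n, reps = 1000, 500

ols_est, tls_est = [], []
for _ in range(reps):
    x = rng.normal(0.0, 1.0, n)        # perfectly observed signal
    y = x + rng.normal(0.0, 1.0, n)    # true beta = 1, y-error variance = 1
    ols_est.append(float(x @ y / (x @ x)))
    # TLS (orthogonal regression) slope from the smallest singular vector of [x | y]
    v = np.linalg.svd(np.column_stack([x, y]))[2][-1]
    tls_est.append(float(-v[0] / v[1]))

print(round(float(np.mean(ols_est)), 2))   # ≈ 1.0: OLS is unbiased when S = 0
print(round(float(np.mean(tls_est)), 2))   # ≈ 1.62: TLS overshoots
```

The population TLS slope here is (1+√5)/2 ≈ 1.618 against a true value of 1: a 62% upward bias created entirely by the wrong assumption about S, in a situation where plain OLS is unbiased.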

What this means is that if an optimal fingerprinting regression yields a large positive coefficient on the ANTH signal this might mean GHG’s affect the climate, or it might mean that they don’t (the true value of beta=0) and TLS is simply biased. The researcher cannot tell which is the case just by looking at the regression results. In the paper I explain some diagnostics that help indicate if TLS can be used, but ultimately relying on TLS requires assuming you are in a situation in which TLS is reliable.

The results are particularly interesting when the true value of beta=0. A fingerprinting, or “signal detection” test starts by assuming beta=0 then constructing a t-statistic using the estimated coefficients. OLS and GLS are fine for this since if beta=0 the coefficient estimates are unbiased. But if beta=0 a t-statistic constructed using the TLS coefficient can be severely biased. The only cases in which TLS is reliably unbiased occur when beta is not zero. But you can’t run a test of beta=0 that depends on the assumption that beta is not zero. Any such test is spurious and meaningless.

Which means that the past 20 years’ worth of “signal detection” claims are likely meaningless unless steps were taken in the original articles to prove the suitability of TLS or to verify its results with an unbiased estimator.

I was unsuccessful in getting this paper published in the two climate science journals to which I submitted it. In both cases the paper was rejected on the point of a (climatologist) referee insisting that S is known in fingerprinting applications and always equals 1/√n, where n is the number of runs in an ensemble mean. But S only takes that value if, for each ensemble member, S is assumed to equal 1. One reviewer conceded the possibility that S might be unknown but pointed out that it has long been known that TLS is unreliable in that case and that I hadn’t provided a solution to the problem.

In my submission to Environmetrics I provided the referee comments that had led to its rejection in climate journals and explained how I expanded the text to state why it is not appropriate to assume S=1. I also asked that at least one reviewer be a statistician, and as it turned out both were. One of them, after noting that statisticians and econometricians don’t like TLS, added:

“it seems to me that the target audience of the paper are practitioners using TLS quite acritically for climatological applications. How large is this community and how influential are conclusions drawn on the basis of TLS, say in the scientific debate concerning attribution?”

In my reply I did my best to explain its influence on the climatology field. I didn’t add, but could have, that 20 years’ worth of applications of TLS are ultimately what brought 100,000 bigwigs to Dubai for COP28 to demand the phaseout of the world’s best energy sources based on estimates of the role of anthropogenic forcings on the climate that are likely heavily overstated. Based on the political impact and economic consequences of its application, TLS is one of the most influential statistical methodologies in the world, despite experts viewing it as highly unreliable compared to readily available alternatives like IV.

Another reviewer said:

“TLS seems to generate always poor performances compared to the OLS. Nonetheless, TLS seems to be the ‘standard’ in fingerprint applications… why is the TLS so popular in physics-related applications?”

Good question! My guess is because it keeps generating answers that climatologists like and they have no incentive to come to terms with its weaknesses. But you don’t have to step far outside climatology to find genuine bewilderment that people use it instead of IV.


For more than 20 years climate scientists—virtually alone among scientific disciplines—have used TLS to estimate anthropogenic GHG signal coefficients despite its tendency to be unreliable unless some strong assumptions hold that in practice are unlikely to be true. Under conditions which easily arise in optimal fingerprinting, TLS yields estimates with large positive biases. Thus any study that has used TLS for optimal fingerprinting without verifying that it is appropriate in the specific data context has likely overstated the result.

In my paper I discuss how a researcher might go about trying to figure out whether TLS is justified in a specific application, but it’s not always possible. In many cases it would be better to use OLS even though it’s known to be biased downward. The problem is that TLS typically has even bigger biases in the opposite direction and there is no sure way of knowing how bad they are. These biases carry over to the topic of “carbon budgets” which are now being cited by courts in climate litigation including here in Canada. TLS-derived signal coefficients yield systematically underestimated carbon budgets.

The IV estimation method has been known at least since the 1960s to be asymptotically unbiased in the errors-in-variables case, yet climatologists don’t use it. So the predictable next question is why haven’t I done a fingerprinting regression using IV methods? I have, but it will be a while before I get the results written up and in the meantime the technique is widely known so anyone who wants to can try it and see what happens.
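For readers wondering what an IV fix might look like, here is a hypothetical sketch (my illustration only, not the analysis mentioned above): a second, independently noisy replicate of the signal, say another ensemble run, serves as the instrument, because its measurement error is uncorrelated with the error in the regressor.

```python
import numpy as np

# Hypothetical IV sketch (illustration only, not the unpublished analysis):
# two independent noisy replicates of the same true signal; one is the
# regressor, the other the instrument. True beta = 1.
rng = np.random.default_rng(3)
n, reps = 500, 2000

ols_est, iv_est = [], []
for _ in range(reps):
    x = rng.normal(0.0, 1.0, n)            # true, unobservable signal
    y = x + rng.normal(0.0, 1.0, n)
    w1 = x + rng.normal(0.0, 1.0, n)       # noisy regressor (one ensemble run)
    w2 = x + rng.normal(0.0, 1.0, n)       # instrument (an independent run)
    ols_est.append(float(w1 @ y / (w1 @ w1)))   # attenuated toward zero
    iv_est.append(float(w2 @ y / (w2 @ w1)))    # consistent: the two error
                                                # series are independent

print(round(float(np.mean(ols_est)), 2))   # ≈ 0.5
print(round(float(np.mean(iv_est)), 2))    # ≈ 1.0
```

The validity of the instrument rests on the two replicates having independent errors, which is an assumption of this sketch, not a claim about any particular data set.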

Tom Halla
December 19, 2023 6:07 am

I really think McKitrick will not get a fair hearing, as taking down His Holiness, Michael Mann, was an intolerable act of saying the quiet part out loud. How dare anyone say the Emperor is naked!

Reply to  Tom Halla
December 19, 2023 6:28 am

Thank you. I now can’t get the image of a naked Michael Mann out of my head!

Andy Pattullo
Reply to  Tom Halla
December 19, 2023 9:14 am

Probably true, but we don’t live in a world of fair hearings. We live in a world of paid propaganda. Ross’ efforts help to arm us all with the knowledge that we are on the right side of truth, and I suspect will recharge many honest people’s efforts to steer policy back to sanity.

Jim Karlock
Reply to  Andy Pattullo
December 19, 2023 8:05 pm

Paid propaganda like this:
“The green movement exists almost only because of support from a small number of philanthropic foundations,” he notes. Grants from fewer than 10 foundations account for well in excess of $1 billion of climate grant-making per year, he adds.
Elites bankrolling group that supports climate criminals

Just Stop Oil Donor Received £110 Million in Green Subsidies from Taxpayer

“They are pouring money into those efforts, as the German journalist Axel Bojanowski pointed out, to a degree that would make the oil lobby blush. At the “Climate Action Summit” in 2018, two dozen billionaire-backed foundations pledged 4 billion dollars for climate-change lobbying. Some of them, like the Hewlett Foundation, are directly funding journalists at the Associated Press for “climate reporting,” while foundations associated with the Packard and Rockefeller families have been backing the journalistic endeavor “Covering Climate Now,” which “collaborates with journalists and newsrooms to produce more informed and urgent climate stories” and is financing hundreds of media outlets.”  From:


Philanthropists now have extraordinary influence in global agencies, and their interests align academic research and NGOs of all kinds. Since 1999, the Bill and Melinda Gates Foundation has made grants exceeding $82 billion, including $4.7 billion granted to the World Health Organization. Grants from the foundation worth $3.4 billion have been made to organizations based in the UK, including more than $2 billion to UK universities, and more than $300 million to Imperial College London. A further $83 million has been granted to UK news media organisations, and more than $57 million to UK think tanks.

AND Fall for Paid News

AP will hire about 20 journalists based in Africa, Brazil, India and the U.S. to supplement the news agency’s journalists already covering climate and the environment. Together the team will transform how AP covers the climate story, including focusing on the profound and varied impacts of climate change on society in areas such as food, agriculture, migration, housing and urban planning, disaster response, the economy and culture.
Read the rest at:

More at:

Andy Pattullo
Reply to  Jim Karlock
December 20, 2023 9:31 am

Yes, this is our battlefield. The eco lunatics enter the fray with wads of cash, immense ego and a total lack of insight. But the truth is entirely on the other side and the truth eventually floats to the surface while lies and deceit tread water only a short time before they drown under the weight of ignorance.

December 19, 2023 6:27 am

Unbiased….. that will never do.

Curious George
Reply to  strativarius
December 19, 2023 8:12 am

Climate scientists misuse statistics. No surprise. Thank you, Ross.

J Boles
December 19, 2023 7:12 am
Richard Page
Reply to  J Boles
December 19, 2023 8:04 am

I know they’ve gone after the big Christmas trees, didn’t know about gifts.

Tom Abbott
December 19, 2023 7:33 am

From the article: “Which means that the past 20 years worth of “signal detection” claims are likely meaningless”

This will make the climate alarmists very unhappy. They are putting so much effort into trying to tie CO2 to extreme weather events, and along comes a guy who tells them all their computer games are not giving reliable results.

From the article: “I also asked that at least one reviewer be a statistician, and as it turned out both were. One of them, after noting that statisticians and econometricians don’t like TLS, added:

“it seems to me that the target audience of the paper are practitioners using TLS quite acritically for climatological applications. How large is this community and how influential are conclusions drawn on the basis of TLS, say in the scientific debate concerning attribution?”

Yes, quite acritically.

The conclusions are destroying the economies and societies of the Western Democracies, so are very influential in the wrong circles.

The Human-caused Climate Change narrative is fueled by computer games/distortions such as this.

Richard Page
Reply to  Tom Abbott
December 19, 2023 8:08 am

Fredi won’t like that, that’s her whole job just gone bust.

Dave Andrews
Reply to  Richard Page
December 19, 2023 9:59 am

It’s alright, she and her colleague weather attribution mates won’t ever read anything by Ross.

Reply to  Dave Andrews
December 19, 2023 7:53 pm

Are you sure she can read?

1 million monkeys typing… Doesn’t mean any of the monkeys can read.

Andy Pattullo
Reply to  Tom Abbott
December 19, 2023 9:18 am

“The conclusions are destroying the economies and societies of the Western Democracies, so are very influential in the wrong circles.”

Yes, but this is a feature, not a flaw, in the climate propaganda movement. Not to worry, the proponents of this madness will all be revealed as charlatans and pickpockets when the pain of compliance becomes obvious to all. That’s when the pitchforks and torches come out. Academia will become a very different place when qualifications and integrity become necessary attributes again.

Reply to  Tom Abbott
December 19, 2023 7:51 pm

“along comes a guy who tells them all their computer games are not giving reliable results.”

Skeptics have told them for over 14 years their models aren’t worth one of the electrons used to run the models.

You know, positive criticisms like “climate models are pure junk!”, and “Not 1 climate model is accurate!”.

Tom Abbott
Reply to  ATheoK
December 20, 2023 2:31 am

Yes, we’ve been trying to tell them for years. They still aren’t listening.

December 19, 2023 7:40 am

“”the ‘standard’ in fingerprint applications””

Roll the bones

“”World Weather Attribution (WWA) scientists quantify how climate change influences the intensity and likelihood of an extreme weather event. They often do this using weather data and computer modelling””

Steve Richards
December 19, 2023 7:52 am

Interesting that when a professional statistician joins a team results can change dramatically!

Our own UK mess called CRU at the UEA was investigated during a rigged enquiry.
Little came out of it (they are all jolly good fellows and all), but the recommendation was that each climate research team have a pro stats person on board.

As you can imagine, that has not happened!

Richard Page
Reply to  Steve Richards
December 19, 2023 8:09 am

Of course it hasn’t happened – nothing would end this scam faster than someone who actually knows what they are doing.

Andy Pattullo
Reply to  Richard Page
December 19, 2023 9:20 am

Yes perhaps they are all ignorant of the truth, but I am suspicious quite a few of them know the truth and find it just too inconvenient for their intentions.

Peta of Newark
December 19, 2023 7:57 am

Yes, spot on: “tendency to be unreliable unless some strong assumptions hold that in practice are unlikely to be true”

The Strong Assumption is that heat energy can flow up a thermal gradient

El Sol: Can force both Earth’s surface and Earth’s atmosphere. It can and does do that because it has a higher temperature than both of those things.

Earth’s Surface: Can not force itself or El Sol but can force the atmosphere. It does do that because it is, on average, warmer than the atmosphere (plus 15°C vs minus 15°C) but colder than El Sol.

Earth’s atmosphere: Can not force itself, nor can it force either Earth’s surface or El Sol, because it is colder than both those places.

The incorrect strong assumption is that the atmosphere does in fact force Earth’s surface. But because Earth’s surface is the only significant forcer of the atmosphere, this is tantamount to the surface forcing itself
i.e. That Earth surface/atmosphere comprise A Perpetual Motion Machine

Fine. Have it your way. Switch off El Sol and see what happens.

Dennis Gerald Sandberg
Reply to  Peta of Newark
December 19, 2023 4:39 pm

Here’s what Bing, “ask me anything,” says about El Sol:
“If the sun were responsible for global warming, we would expect to see warming in all layers of the atmosphere, from the surface to the upper atmosphere (stratosphere). But what we actually see is warming at the surface and cooling in the stratosphere. This is consistent with warming caused by an accumulation of gases that trap heat near the Earth’s surface, and not because the sun is ‘getting hotter.’”
I report, you decide.

Jim Karlock
Reply to  Dennis Gerald Sandberg
December 19, 2023 8:22 pm

Try asking Bing for actual evidence that man’s CO2 is causing serious global warming.
The result is proof that Bing does not have a clue about evidence.

Tom Abbott
Reply to  Jim Karlock
December 20, 2023 3:05 am

Every interrogation of AI about the subject of climate change by skeptics demonstrates that AIs don’t know what they are talking about. All AIs do is regurgitate climate alarmist talking points.

Another good question to ask an AI is whether it knows the difference between facts and evidence on the one hand, and speculation, assumptions and unsubstantiated assertions on the other.

I assume the AI will say it knows the difference.

Then ask the AI to go back and find every climate alarmist claim from the beginning of time and place each claim in its appropriate category. Is the claim fact/evidence, or is it speculation, assumption or unsubstantiated assertion?

I want to see what the AI puts in the “fact/evidence” category. I know for certain that other than the basic greenhouse theory of CO2, the AI will find nothing that qualifies as evidence. I’ve been looking for a long time and have not found any evidence that CO2 is doing anything discernible to the Earth’s atmosphere. The weather now is no different from when I was a kid. In fact, the weather now is milder than when I was a kid, for the most part.

Where’s the evidence, AI?

Tom Abbott
Reply to  Dennis Gerald Sandberg
December 20, 2023 2:55 am

“AI: “But what we actually see is warming at the surface and cooling in the stratosphere. This is consistent with warming caused by an accumulation of gases that trap heat near the Earth’s surface, and not because the sun is “getting hotter”

This is also consistent with the Sun heating the Earth which causes the atmospheric circulation that causes warming in the lower atmosphere and cooling in the upper atmosphere, based on the gases it contains.

When it comes to Artificial Intelligence and Human-caused climate change, the AIs are just agents of alarmist climate change propaganda. They are programmed to promote human-caused climate change and are not programmed to look at it objectively, as should be obvious.

December 19, 2023 8:00 am

This is way beyond any statistics courses I took in engineering, but I take some solace in knowing that one of the main repositories of engineering midterm flunkouts was the environmental science faculty, so I’m pretty sure most self-proclaimed CliSci’s won’t have a hope of calculating or understanding McKitrick’s beta.

December 19, 2023 9:07 am

Story tip

“”Researchers have simulated a ‘runaway greenhouse effect’ – a dramatic escalation in temperatures on our planet.

Worryingly, they say that Earth could soon be an ‘uninhabitable hell’, much like our neighbouring planet, Venus.””

True to form: With new climate models…

Reply to  strativarius
December 19, 2023 9:21 am

I saw that story, too!
CO2 emissions will cause feedbacks, and tipping points. Models say so! Here’s a link to the study.
– – – – – – – – –

First exploration of the runaway greenhouse transition with a 3D General Circulation Model

While their detections remain challenging at present, observations of small terrestrial planets will become easier in a near future thanks to continuous improvements of detection and characterisation instruments. In this quest, climate modeling is a key step to understanding their characteristics, atmospheric composition, and possible histories. If a surface water reservoir is present on such a terrestrial planet, an increase in insolation may lead to a dramatic positive feedback induced by water evaporation: the runaway greenhouse.

Dave Andrews
Reply to  Cam_S
December 19, 2023 10:09 am

One has to ask why did it not happen when CO2 was 7000ppm or higher in the past?

Reply to  Dave Andrews
December 19, 2023 11:00 am

Before the advent of green plants, CO2 levels were closer to 70% or 700,000ppm.

Joseph Zorzin
Reply to  MarkW
December 19, 2023 11:54 am

were the oceans boiling?

Richard Page
Reply to  Dave Andrews
December 19, 2023 5:00 pm

Because that was ‘good’ CO2, not the ‘bad’ CO2 that comes out of car exhausts and smokestacks.

Smart Rock
December 19, 2023 10:17 am

Dr. McKitrick’s statistics are way beyond my ability to comprehend (I’m probably in good company there!), but this statement, in a carefully understated tone, is not (my bold):

why is the TLS so popular in physics-related applications? Good question! My guess is because it keeps generating answers that climatologists like and they have no incentive to come to terms with its weaknesses

and he also makes an observation that goes to the very heart of what is laughingly referred to as “climate science”:

I noticed that the ANTH and NAT signals are always negatively correlated

ANTH and NAT being the model-generated anthropogenic and natural “signals” (i.e. warming/cooling trends). They HAVE to be negatively correlated because ANTH is greater than the “observed” trend. So their TLS data-mongering produces results that can be summarised in the equation:
OBS = ANTH + NAT (where NAT is a negative quantity). Or, to express it in words:

Observed warming equals (model-generated) anthropogenic warming minus (model-generated) natural cooling. Bingo! QED! We found the human fingerprint! Attribution is so cool! Ain’t science wonderful? Where’s our Nobel Prize?

I’ve commented before on this transparently obvious and scientifically fraudulent use of model output that was created for that specific purpose. I wonder how many trial runs, how much tweaking of their parameterizations, it took to get the results they needed to “prove” the hypothesis.

Climate science is the first truly post-modern science, in which the conclusions are determined in advance to match the approved theories of the scientists and their political masters. And the discipline of “attribution science” is one of its crowning achievements. And if you question any aspect of it, you are denialist trash, the scum of the earth. Heaven help us.

PS, if I have misunderstood what Dr. McKitrick wrote in his post (which is quite likely), I apologise, and I assume someone will correct me.

Rud Istvan
December 19, 2023 10:21 am

It seems the ‘climate science’ community has a very long history of misusing/abusing statistics. I provided a glaring example in my very first post here back in 2011. A ‘famous’ regression attempt to show AGW negative impact on corn yields intentionally omitted (on spurious logical grounds) a key regression term. Yield suffers experimentally when it is hot AND dry, but NOT when it is hot and wet. Omitting a crucial temp x rainfall regression component constitutes academic misconduct.

Jim Masterson
Reply to  Rud Istvan
December 19, 2023 9:11 pm

“It seems the ‘climate science’ community has a very long history of misusing/abusing statistics.”

They also have a very long history of misusing/abusing thermodynamics. They average intensive properties such as temperature as if that were standard and correct practice.

December 19, 2023 10:54 am

I find it funny how the same people who whine about non-climate “scientists” talking about climate science, have no problem with non-statisticians playing with complex statistics.

Roy Clark
December 19, 2023 11:07 am

There is a much deeper problem than an incorrect least squares analysis of a global mean climate record. It is impossible for the change in long wave IR (LWIR) flux produced by a so called greenhouse gas forcing to cause any kind of climate change. Any temperature increases are too small to measure. There are five parts to this analysis:
1) It is impossible for the small decrease in LWIR flux (radiative forcing) at the top of the atmosphere to couple to the surface because of molecular line broadening effects in the troposphere.
2) There is no thermal equilibrium or steady state, so a change in flux has to be interpreted as a change in the rate of cooling (or heating) of a set of coupled thermal reservoirs. In the troposphere, at low to mid latitudes, a doubling of the CO2 concentration from 300 to 600 ppm produces a maximum decrease in the cooling rate, or a slight warming of +0.08 C per day. This is too small to measure in the normal temperature variations found in the turbulent boundary layer near the surface. 
3) Over the oceans, the penetration depth of the LWIR radiation is less than 100 micron (0.004 inches). Here it is fully coupled to the wind driven evaporation or latent heat flux. At present the annual increase in average CO2 concentration is near 2.4 ppm per year. This produces an increase in the downward LWIR flux to the surface of approximately 0.034 W m-2 per year. There can be no ‘water vapor feedback’ in the evaporation process at the ocean surface. Any increase in ocean surface temperature produced by an increase in CO2 LWIR flux is too small to measure.
4) Over land, almost all of the absorbed solar flux is dissipated within the same diurnal cycle in which it is received. There is a convection transition temperature each evening when the convection stops and the surface continues to cool more slowly by net LWIR emission. This transition temperature is reset each day by the local weather system passing through. Any surface warming produced by an increase in downward LWIR flux from CO2 is too small to measure.
5) There can be no CO2 signal in the global temperature record. The main term is temperature change from ocean oscillations, mostly the Atlantic Multidecadal Oscillation (AMO). There is an obvious peak near 1940 from the warming phase of the AMO. In addition, there is heating from urban heat island effects, changes to the weather station rural/urban mix and ‘adjustments’ related to homogenization. 
Any statistical analysis should start with ‘too small to measure’. This requires the determination of the short term signal to noise ratio and a proper analysis of the surface energy transfer. This involves signal processing or Nyquist theory, not the least squares analysis of a spurious signal.
The concepts of radiative forcing, feedbacks and a climate sensitivity to CO2 are pseudoscientific nonsense. The problems go back to the invalid assumptions of a steady state atmosphere and a fixed relative humidity distribution used by Manabe and Wetherald in 1967. 
More information on climate pseudoscience is available at:
A more detailed discussion of climate energy transfer is provided in the recent book ‘Finding Simplicity in a Complex World – The Role of the Diurnal Temperature Cycle in Climate Energy Transfer and Climate Change’ by Roy Clark and Arthur Rörsch. A summary and selected abstracts including references relevant to this discussion are available at:
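As a sanity check on the arithmetic in point 3 above: the ~0.034 W m-2 per year figure is of the same order as what the standard simplified logarithmic forcing expression (Myhre et al. 1998) gives for a 2.4 ppm increment. Note the simplified expression is a top-of-atmosphere forcing, not a downward surface flux, so the two numbers need not match exactly, and the ~420 ppm baseline below is an assumption for illustration:

```python
import math

# Standard simplified CO2 forcing approximation: dF = 5.35 * ln(C/C0) W/m^2
# (Myhre et al. 1998). Baseline of 420 ppm is an assumed round figure;
# the comment's 0.034 W/m^2 is a surface-flux estimate, so only the
# order of magnitude is being checked here.
c0 = 420.0   # ppm, assumed current baseline
dc = 2.4     # ppm/year, annual increase cited in the comment
dF = 5.35 * math.log((c0 + dc) / c0)
print(round(dF, 4))  # roughly 0.03 W/m^2 per year
```

Either way, the annual increment is a few hundredths of a watt per square metre, which is the commenter's point about measurability.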

Jim Masterson
Reply to  Roy Clark
December 19, 2023 9:17 pm

“There is no thermal equilibrium or steady state . . . .”

They assume the nonsense LTE (Local Thermodynamic Equilibrium), so they don’t have to worry about atmospheric equilibrium.

Gary Pearse
December 19, 2023 11:08 am

“I noticed that the ANTH and NAT signals are always negatively correlated (the average correlation coefficient is -0.6).”

Ross, to a mineral processing-hydrometallurgical engineer, your statement above identifies the action of the Le Châtelier Principle (LCP) in chemistry, which states: with any perturbation of one (or more) components of a system of interacting components (e.g. chemical composition, temperature, pressure, enthalpy of the three phases of water, etc.), all the other components react to resist the perturbing change, thereby greatly reducing its effect.

An easy-to-understand example: when gigatons of CO2 are emitted to the atmosphere, this raises the partial pressure of the gas in the atmosphere, which immediately increases its solubility into the oceans. In turn, the increase in dissolved CO2 stimulates plankton and other shellfish to use up the CO2 in making calcium carbonate shells, plus inorganic precipitation of carbonate that sinks in the sea. Another component, plants, increases photosynthesis, taking more CO2 back out of the system.

When the 1990 25-year forecast of warming proved to be 300% too high, I analyzed this as follows. As a first assumption, I assumed physicists had (naively) done a ceteris paribus (all else unchanged) calculation of CO2’s effect, unaware of LCP’s powerful marching orders! Surprise from Gavin Schmidt (“Models are running way too hot and we don’t know why.”) convinced me they were sincere. I advised that their ceteris paribus estimate needed to be multiplied by an LCP coefficient of ~0.3. This is interestingly close to your -0.6 negative correlation coefficient!

I don’t want to be overly harsh on physicists here except to advise that climate science needs to be multidisciplinary if it wants to cross over into the realm of real science. A pure mathematician has no business designing an atmospheric model by himself. I am surprised, however, that chemists do not accord LCP much thought. To them it’s a little thing useful for divining which way a chemical reaction will go if you add something to a solution. Le Châtelier himself didn’t venture beyond the lab test tube either, but somebody should have.

December 19, 2023 11:45 am

 I mention this because if anyone objects that I’m not a “climate scientist” my response is: you’re right, I’m an economist which is why I’m qualified to talk about this.

Similar to McIntyre being a statistician and Mann not being one.

Joseph Zorzin
December 19, 2023 11:51 am

“… hailed it as the way to show a causal link between greenhouse forcing and observed climate change…”

hmmm… so, if it hadn’t shown that link, would they have hailed it? They must have been thrilled to see what they were hoping to see.

December 19, 2023 12:15 pm

To date the CO2 derivative curve seems to have barely any response to the alleged 2023 temperature spike. This is current to September 2023. What’s Up with That? I generally find this quite useful as an unbiased temperature variation check – is it broken?

Reply to  JCM
December 19, 2023 12:17 pm

oops! lol anywayz
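For context on the “CO2 derivative curve” JCM refers to: the usual construction differences a monthly CO2 series over 12 months, which cancels the seasonal cycle and leaves the year-on-year growth rate that gets compared against temperature variations. A minimal sketch on a synthetic series (the trend and seasonal amplitude are made-up illustration values, not real Mauna Loa data):

```python
import numpy as np

# Synthetic monthly CO2 series: linear trend plus an annual cycle.
# The 0.2 ppm/month trend and 3 ppm seasonal swing are illustrative only.
months = np.arange(240)
co2 = 370.0 + 0.2 * months + 3.0 * np.sin(2 * np.pi * months / 12)

# 12-month difference: the seasonal term cancels exactly, leaving the
# annual growth rate (here a constant 2.4 ppm/year by construction).
deriv = co2[12:] - co2[:-12]
```

On real data the residual wiggles in `deriv`, not the trend, are what get compared to temperature spikes.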

December 19, 2023 12:54 pm

Nice. I don’t understand things like this but I wonder if it would be practical to construct a demonstration using OLS, TLS and IV.

Richard Page
Reply to  Bob
December 19, 2023 5:02 pm

I think he’s working on it, give him time.

Jim Karlock
December 19, 2023 8:03 pm

Shouldn’t step ONE be to show that our current climate is unique compared to history as a prerequisite to doing attribution?

What do these methods say caused the Minoan, Roman and Medieval warm periods? Since they were all warmer than now, that should make a good test – can they honestly attribute them to nature and our current, cooler, period to CO2?

Not just CO2, but man’s CO2?


Tom Abbott
Reply to  Jim Karlock
December 20, 2023 3:55 am

“Shouldn’t step ONE be to show that our current climate is unique compared to history as a prerequisite to doing attribution?”


Climate change alarmists want to ignore weather history. It makes their claims of a climate crisis look a little demented.

December 20, 2023 1:47 am

In one of the links Allen has claimed that “…and [TLS] in turn has been largely superseded by the regularised regression or likelihood-maximising approaches, developed entirely independently.” 

Are these approaches any improvement?
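For background on the quoted remark: “regularised” optimal fingerprinting (Ribes et al.) replaces the poorly conditioned sample covariance of internal variability, estimated from a limited number of control-run samples, with a Ledoit-Wolf-style shrinkage estimate that blends it with a scaled identity matrix. Whether that fixes the biases discussed in the post is a separate question. A minimal sketch of the shrinkage step only (the data dimensions and the shrinkage weight are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
# Pretend "control run" samples: 20 realisations of a 50-dimensional field.
# With fewer samples than dimensions, the sample covariance is singular.
Xc = rng.normal(size=(20, 50))
S = np.cov(Xc, rowvar=False)

# Shrinkage: blend S with a scaled identity target. alpha = 0.5 is an
# arbitrary illustration; the Ledoit-Wolf formula chooses it from the data.
alpha = 0.5
target = (np.trace(S) / S.shape[0]) * np.eye(S.shape[0])
S_reg = (1 - alpha) * S + alpha * target

# The regularised matrix is invertible even though S is not.
print(np.linalg.matrix_rank(S) < 50, np.linalg.matrix_rank(S_reg) == 50)
```

The invertible `S_reg` is what gets plugged into the fingerprinting regression in place of the singular sample covariance.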

December 20, 2023 3:56 am

This article is a perfect example of why the economics profession’s reputation is in the toilet. Economics has devolved into mathematical mass-turbation. For the US, as an example I follow, US economists, as a group, have never predicted a recession. Not even once.

Here a Canadian economist puts on a climate scientist hat and completely wastes his time and ours examining wild guesses of the causes of global warming. No one on this planet knows exactly what each climate change variable did to the climate in the past 48 years of global warming. But we have a lot of official guesses stated with great certainty that they do not deserve.

There are no data.

Statistics require reasonably accurate data.

With no data on the exact causes of post-1975 warming, these statistics are totally worthless.

The author concludes:

“based on estimates of the role of anthropogenic forcings on the climate that are likely heavily overstated.”

We already knew that in 1988 when the IPCC was formed to blame global warming on manmade CO2 emissions. This became very obvious in 1995 when the IPCC stated, with no data, their arbitrary conclusion that all natural causes of climate change were just “noise”.

If we were not sure in 1988, it became very obvious by 1995 that manmade causes of global warming were intended to be overstated for political reasons. The IPCC conclusion about natural causes of climate change came first. Then the IPCC did what it could to “prove” the conclusion that was politically acceptable to leftists, which consisted mainly of ignoring all contrary data.

Science + Politics = Science

Wild guess “data” + Statistics
= Wild guess “data”

Reply to  Richard Greene
December 20, 2023 4:02 am


Science + Politics = Politics

I accidentally typed the wrong leftist equation
Science + Politics = Science
