This is a highly technical post. No need to comment on that fact.

G.L. Browning’s paper, *The unique, well posed reduced system for atmospheric flows: Robustness in the presence of small scale surface irregularities*, was published in *Dynamics of Atmospheres and Oceans*.

### About the article

In 1948, Jule Charney made a seminal contribution to atmospheric science by mathematically formalizing concepts related to large-scale atmospheric behavior. Specifically, he introduced a scaling of the atmospheric equations of motion. This scaling provided insights into the physical properties of the trough and ridge pressure systems, which are standard fixtures in daily television forecasts.

These systems possess distinct attributes, such as:

- Horizontal lengths of around 1000 km
- Vertical depths of about 10 km
- Horizontal velocities of 10 m/s and vertical velocities of 0.01 m/s
- Variations from their hydrostatic mean in horizontal pressure and density, usually around 1%

Charney’s approach highlighted the dominance of certain terms in the equations of motion. For instance, the horizontal pressure gradient terms and the Coriolis terms were an order of magnitude larger than other terms in the momentum equations. This suggested that these dominant terms must be nearly equal for the system to evolve according to its described properties. This balance, termed the “geostrophic balance,” remains fundamental in preparing meteorological data for forecast models today. Simultaneously, a finer balance, known as “hydrostatic balance,” is crucial for determining altitude from pressure for aviation purposes.
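Charney’s order-of-magnitude argument can be checked with the scale values listed above. The sketch below compares the Coriolis and advective acceleration scales; the mid-latitude Coriolis parameter of 1e-4 s⁻¹ is a standard nominal value, not a figure from the post.

```python
# Back-of-envelope check of Charney's scaling, using the scales quoted
# above (nominal mid-latitude figures, not model output).
U = 10.0        # horizontal velocity scale, m/s
L = 1.0e6       # horizontal length scale, m (1000 km)
f = 1.0e-4      # Coriolis parameter at mid-latitudes, 1/s (assumed value)

coriolis = f * U            # Coriolis term scale, f*U
advection = U * U / L       # advective acceleration scale, U^2/L

rossby = advection / coriolis   # Rossby number U/(f*L)
print(f"Coriolis term  ~ {coriolis:.1e} m/s^2")
print(f"Advective term ~ {advection:.1e} m/s^2")
print(f"Rossby number  ~ {rossby:.2f}")  # ~0.1: Coriolis dominates by 10x
```

With these scales the Coriolis and pressure-gradient terms exceed the remaining terms by roughly a factor of ten, which is the imbalance Charney exploited.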

However, Charney’s proposition of the “quasi-geostrophic system” based on these balances faced challenges. Critics believed it inadequately represented the evolution of large-scale systems. The scientific community gravitated towards a model primarily hinged on hydrostatic balance, the so-called “primitive” or “hydrostatic system.” This model is predominant in most of today’s large-scale weather and climate modeling.

Later, in 1979, Heinz-Otto Kreiss enriched the field with a rigorous mathematical theory to better dissect hyperbolic systems with multiple time scales, encompassing atmospheric, oceanic, and plasma equations of motion. Kreiss’s theory elucidated increasingly accurate “reduced” systems that effectively captured the slow, large-scale motions of the atmosphere. Crucially, the popular hydrostatic system was found to misalign with some facets of Kreiss’s theory. He argued that a slight modification to the quasi-geostrophic system could have rendered it exceptionally precise.

One might pose a question: How does weather prediction using the hydrostatic system appear accurate? Scrutiny by researchers like Gravel et al. showed that significant errors arise from the boundary layer parameterization. Without ad hoc corrections, surface velocities can increase unrealistically. This error can proliferate and lead to a forecast deviation of about 50% within 36 hours. To circumvent this, fresh observational data is regularly fed into the model. However, such an approach isn’t directly translatable to climate models.

Of note, the hydrostatic system is less effective when restricted to a limited geographic scope, such as forecasting solely for the US. This challenge catalyzed deeper inquiry, culminating in the mathematical innovations pioneered by Professor Kreiss. His framework invariably ensures well-posed reduced systems for limited domains and has broader applicability, including mesoscale and equatorial atmospheric behaviors, oceanic dynamics, and plasma physics.

### Abstract

It is well known that the primitive equations (the atmospheric equations of motion under the additional assumption of hydrostatic equilibrium for large scale motions) are ill posed when used in a limited area on the globe. Yet the atmospheric equations of motion for large scale motions are essentially a hyperbolic system that with appropriate boundary conditions should lead to a well posed system in a limited area. This apparent paradox was resolved by Kreiss through the introduction of the mathematical Bounded Derivative Theory (BDT) for any symmetric hyperbolic system with multiple time scales (as is the case for the atmospheric equations of motion). The BDT uses norm estimation techniques from the mathematical theory of symmetric hyperbolic systems to prove that if the norms of the spatial and temporal derivatives of the ensuing solution are independent of the fast time scales (thus the concept of bounded derivatives), then the subsequent solution will only evolve on the advective space and time scales (slowly evolving in time in BDT parlance) for a period of time. The requirement that the norm of the time derivatives of the ensuing solution be independent of the fast time scales leads to a number of elliptic equations that must be satisfied by the initial conditions and ensuing solution. In the atmospheric case this results in a 2D elliptic equation for the pressure and a 3D equation for the vertical component of the velocity.

Utilizing those constraints with an equation for the slowly evolving in time vertical component of vorticity leads to a single time scale (reduced) system that accurately describes the slowly evolving in time solution of the atmospheric equations and is automatically well posed for a limited area domain. The 3D elliptic equation for the vertical component of velocity is not sensitive to small scale perturbations at the lower boundary so the equation can be used all of the way to the surface in the reduced system, eliminating the discontinuity between the equations for the boundary layer and troposphere and the problem of unrealistic growth in the horizontal velocity near the surface in the hydrostatic system.

For references relevant to this discussion see the article

Browning, G. L. “The unique, well posed reduced system for atmospheric flows: Robustness in the presence of small scale surface irregularities.” *Dynamics of Atmospheres and Oceans* 91 (2020): 101143.

“Challenging the Mathematical Basis of General Circulation Models”

This does not challenge that basis. The basis is barely mentioned. Gerald B describes an old method once suggested by his PhD supervisor, which has never been made to work.

“The requirement that the norm of the time derivatives of the ensuing solution be independent of the fast time scales leads to a number of elliptic equations that must be satisfied by the initial conditions and ensuing solution. In the atmospheric case this results in a 2D elliptic equation for the pressure and a 3D equation for the vertical component of the velocity.”

Well, many CFD methods do that. But GCMs don’t need to. They resolve the fast time scales (hence the relatively short time steps of twenty minutes or so), and they do use a hydrostatic relation to replace the vertical velocity component in the momentum equation. That’s not saying that there is no vertical motion. It says that vertical acceleration is an insignificant component in the momentum equation, which can be easily checked.

It has also been shown in a peer-reviewed manuscript that the improved quasi-geostrophic system accurately describes mesoscale system development, as the 3D elliptic equation for the vertical component of the velocity w provides the right balance between the total heating and w. This was also demonstrated in an actual mesoscale model by Pagé and Zwack.

Large scale weather models actually suppress the fast time scales through initialization (nonlinear normal mode initialization) because they mess up the physical parameterizations. The mathematical theory developed by Professor Kreiss shows how to prepare the initial data to suppress the fast time scales in the most efficient manner, namely by satisfying the elliptic equations in my manuscript. The large scale solution must satisfy those elliptic equations if it is to continue to evolve according to its prescribed characteristics. That is the reason the reduced system is so accurate.

Engineering, in part, is the determination of which parameters are insignificant (or more easily represented by combining other parameters).

‘That’s not saying that there is no vertical motion. It says that vertical acceleration is (ASSIGNED AS … not just assumed to be) an insignificant component in the momentum equation, which is then waved off by saying chaos is easy to model (and hope no one notices)’

Most real, practicing engineers think you are full of crap. That lying shit-for-brains academic at Stanford is clueless as to how the real world works.

Don M,

Sometimes making a physical assumption that something is small can have adverse effects on the mathematics. In the case of meteorology the neglect of the time dependent equation for the vertical component of the velocity and ensuing assumption of hydrostatic equilibrium led to the ill posedness of the initial/boundary value problem when using those equations for forecasting over a limited area on the globe. That started the mathematical search for the root of this problem and the ensuing theory for hyperbolic systems with multiple time scales introduced by Professor Kreiss.

Jerry

and sometimes, prior to making any assumptions at all, the modeling purpose needs to be clearly understood & stated.

modeling for the sake of modeling is simply art & success is again, simply, dependent on the eye of the beholder.

Indeed.

I can’t quickly find a figure for how much energy is transported vertically and poleward by convection (ie. the Hadley Cell etc.) My guess is that vertical convection is responsible for the majority of the atmospheric heat that moves upward and poleward.

The vertical velocity quoted above may be OK as an average but my guess is that it wildly understates the vertical velocity at the equator for instance.

So, to your comment, I would add a spatial component. Because of the Hadley cells, the Mid-latitude cells, and the Polar cells, there are large vertical components at the equator, 30 and 60 deg. (north and south) and the poles. link

The balanced flow at the equator has been addressed in a separate manuscript and is similar to the balance in mesoscale storms, i.e., the vertical component of velocity is proportional to the total heating.

So the Charney scaling for large scale flows in the midlatitudes does not apply there. But Charney also published a manuscript discussing equatorial flows.
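As a rough illustration of that proportionality (not the manuscript’s actual equation), one common textbook form of the balance sets the vertical velocity times the static stability equal to the heating rate. The numbers below are hypothetical round values chosen only to show the scale:

```python
# Illustrative only: the simplest textbook form of the stated balance,
# w * d(theta)/dz ~ Q, i.e. vertical velocity proportional to total heating.
# All numbers are hypothetical round values, not from any manuscript.
Q = 2.0 / 3600.0     # heating rate, K/s (2 K per hour, a convective value)
dtheta_dz = 4.0e-3   # static stability, K/m

w = Q / dtheta_dz    # balanced vertical velocity, m/s
print(f"w ~ {w:.2f} m/s")
```

The resulting w is roughly 0.1 m/s, an order of magnitude larger than the 0.01 m/s large-scale value quoted in the post, consistent with the point that the midlatitude scaling does not apply in heated regions.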

BTW for the climate models to get the equatorial evolution correct they would have to get the transfer of energy between the surface of the ocean and the atmosphere correct and this is poorly (if at all) understood.

…which came first, the storm following the valley, or the valley guiding the storm?

I suspect we are not paying enough attention to the weather underground, and how that fits in with the weather on the sun. Climate is a symptom, not a cause..?

Yep! Wake me up when they can explain the following. I have no idea where and when this photo is from, but I have seen similar, and on that flat, flat landscape, drive towards that cloud and you will find a tiny hill, not a hundred feet high, a little bulge on the landscape. Enough to cause this type of thing:

Anyone who lives on flat land, like me in Kansas, knows that many storms, not all but many, are affected by things like river valleys. They will follow the valley rather than cross it.

Without knowing the source of this photograph or the meteorological conditions at the time, it will be hard to understand this phenomenon. Please supply more info. Thanks.

Here’s some background information on that particular photo:

These get called flying saucers or UFOs a lot, not only because of their lens shape, but since they also appear to hover. The wave in the air that is causing these lenticulars is maintaining a crest in a constant location. So, lenticulars stand in one place for a period of time.

You don’t see the mountain range in the shot, but that is likely what is causing this. High winds out of the west and northwest hit the mountains or other obstruction, causing a disruption in the flow of air and making a wave downstream of the obstruction. A smooth cloud forms at the crest of the wave where the air meets the dewpoint, and as the air falls down the other side of the wave, evaporation occurs.

Lenticulars can form at any level of the atmosphere, but this variety is altocumulus. There have been very strong winds in Wyoming the last couple of days. There is a High Wind Warning in parts of Wyoming Tuesday with 35 to 45 mph sustained winds and gusts near 70 mph. So there you go – these aren’t UFOs but rather lenticular clouds caused by a unique combination of wind and the mountains. Source

If there is a mountain range that is not easy to see in the photograph, your explanation makes perfect sense. Thanks for the additional info!

Pagé, Christian, and Peter Zwack. “1.24 Nonlinear Balance at Mesoscale: Balanced Vertical Motion with Respect to Model Convection Temperature Tendencies. A Real Case Study at 2.5 km.”

Weather prediction models suppress the fast time scales because they cause strange behaviors of the physical parameterizations. They do so by modifying the initial data using nonlinear normal mode initialization. Kreiss introduced a rigorous mathematical method to accomplish this and it requires solving the elliptic equations in my manuscript. For the large scale solution to evolve as per the specified characteristics, the solution must continue to satisfy those elliptic equations at a later time.

Thus the modified quasi-geostrophic system is extremely accurate, unlike the hydrostatic equations, which diverge from reality by 50% in 36 hours (Gravel et al.).

Note that the manuscript by Pagé and Zwack shows that the dominant part of the solution of a mesoscale storm in an operational mesoscale model satisfies the balance between the vertical velocity and total heating, as predicted by an earlier manuscript by Browning and Kreiss.

Before the latter manuscript, initialization of mesoscale models was not understood. The modified quasi-geostrophic system correctly generates this balance for mesoscale storms (see references in my manuscript).

Nick, you have not read the manuscript, because the modified system in my manuscript is not the same as the Charney quasi-geostrophic system.

Nick is performing his assigned duties.

Stokes is at his ‘best’ when he is Nick Picking.

You don’t really think the first guy to comment actually checked anything first, do you?

I have now been able to add a comment right after Nick’s that should be quite embarrassing.

Tell me you finally figured out how blog commenting works without telling me you figured out how blog commenting works.

“But GCMs don’t need to”

That’s because they are based on conjecture and imaginary physics.

The Grimm Bros didn’t need to prove that the Big Bad Wolf actually existed, either.

Where’s the part about clouds?

Geez, they do all that and STILL get reality so wrong.

What’s Up With That?

The hydrostatic system of equations has never been proved to be an accurate approximation of large scale motions of the atmosphere because it is not possible (it is actually ill posed for the smaller scales of motion for the initial value problem).

Your “proof” is that the vertical acceleration is small through scaling arguments by Charney, leading to the hydrostatic assumption and the hydrostatic system. That is not a proof. In fact, if you read my manuscript, it points out that the only way that the large scale solution described by Charney can be proved to continue to evolve with the same characteristics is through the rigorous theory developed by Professor Kreiss. You have made the same mistake that the meteorologists did when they switched to the hydrostatic system. Kreiss mathematically proved that the multiscale hyperbolic system agrees with the full dynamical system up to 1%. Then his theory proves that the modified quasi-geostrophic system in my manuscript approximates the large scale solutions of that hyperbolic system up to 1%. By including the extra term in the time dependent equation for the vertical component of vorticity in the elliptic operator for w and doubling the model’s resolution, the error between the two systems reduced to 2%, exactly as predicted by the Kreiss theory (the reviewers are aware of this result).

Jerry


Charles Ross will not let me post a later comment of mine to refute the nonsense of Nick Stokes’s comment.

Rotter

My comment above should be posted right after Nick Stokes first comment and I have tried to do so multiple times, but Charles continues to prevent that from happening.

Stop speaking of things of which you have no understanding.

If you continue to accuse me of things I have not done I will block you from commenting or perhaps delete this entire post.

I’ve taken enough of this from you in emails. For you to spread ignorant, incorrect accusations in public is a bridge too far.

I need to publicly apologize to Charles. I wanted to write a post that appeared immediately below Nick Stokes’s initial post so that there could be an immediate rebuttal to his statements. I now see that those posts are moved further down to the end of other posts responding to Nick. That is unfortunate, but my mistake. Charles is within his rights to cancel this entire thread.

Gerald Browning

The mistake made by Nick Stokes is the same as that made by the modelers who switched to the hydrostatic system. Scaling by itself does not mean the continuum solution will continue to evolve with the desired characteristics. That can only be ensured by the mathematical theory introduced by Professor Kreiss. In particular, his theory requires that a number of time derivatives be of the order of the advective terms. A perfect example of this requirement is the well known need for quasi-geostrophic balance for the first order time derivative of the horizontal momentum equations. The mistake that was made in the original quasi-geostrophic system was that the first order time derivative of the pressure or continuity equation did not satisfy the Kreiss condition, namely that the horizontal divergence be determined from the vertical velocity. The vertical velocity can be obtained from the well known 3D elliptic omega equation, and then the horizontal divergence computed from the vertical velocity as prescribed. The shortcoming of the original quasi-geostrophic system was that the horizontal divergence was not included in the computation of the wind. When that is done, the modified quasi-geostrophic (reduced) system is extremely accurate.
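The final step described above, determining the horizontal divergence from the vertical velocity, can be sketched via the continuity relation du/dx + dv/dy = −dw/dz. This is an illustrative finite-difference column, not the manuscript’s numerics, and the w profile is hypothetical:

```python
import numpy as np

# Sketch (not the manuscript's scheme): once w is known from an elliptic
# solve, continuity du/dx + dv/dy = -dw/dz gives the horizontal divergence.
nz = 50
z = np.linspace(0.0, 1.0e4, nz)            # column from surface to 10 km
w = 0.01 * np.sin(np.pi * z / 1.0e4)       # hypothetical w profile, max 0.01 m/s

divergence = -np.gradient(w, z)            # delta = -dw/dz at each level
print(f"surface divergence ~ {divergence[0]:.2e} 1/s")
```

Note that w and its vertical derivative, not a hydrostatic relation, determine the divergence here, which is the correction the comment says the original quasi-geostrophic system lacked.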

“The mistake made by Nick Stokes is the same as by the modelers that switched to the hydrostatic system.”

No modellers ever “switched” to the hydrostatic system. You can’t avoid it. The CFL condition at those time steps requires a grid length of at least 100 km. You can’t have that in the vertical.

Fortunately the pressure Poisson equation is solvable in the vertical, and the hydrostatic relation is the solution.

Yes you can avoid it by using the unique, accurate well posed reduced system.

That system is not sensitive to errors, unlike the hydrostatic system, which deviates from reality by 50% in 36 hours.

And as a bonus the reduced system also treats evolving mesoscale storms correctly.

The first successful numerical prediction was performed using the ENIAC digital computer in 1950 by a team led by American meteorologist Jule Charney. The team included Philip Thompson, Larry Gates, Norwegian meteorologist Ragnar Fjørtoft, applied mathematician John von Neumann, computer programmer Klara Dan von Neumann, M. H. Frankel, Jerome Namias, John C. Freeman Jr., Francis Reichelderfer, George Platzman, and Joseph Smagorinsky.[5][6][7] They used a simplified form of atmospheric dynamics based on solving the barotropic vorticity equation over a single layer of the atmosphere, by computing the geopotential height of the atmosphere’s 500 millibars (15 inHg) pressure surface.[8]

From Wikipedia. Note the model was not based on the hydrostatic system. Modelers switched to that later on.

Nick has the CFL criterion reversed. It is the grid size that determines the time step.

No, the CFL condition merely restricts the ratio. If you specify a time step, it tells you the minimum grid size you can have.

Yes it restricts the ratio. But normally computer power restricts the mesh size and then the CFL determines the time step, not the other way around.
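The workflow described in this exchange, grid first and then time step, can be sketched as follows. The Courant number and the use of an acoustic wave speed are illustrative assumptions, chosen because sound waves are the fastest signal in a fully compressible (nonhydrostatic) model:

```python
# Usual practice per the comment above: pick the grid, then let the CFL
# condition bound the explicit time step, dt <= C * dx / c.
def max_time_step(dx, c, courant=1.0):
    """Largest stable explicit time step for grid spacing dx (m) and
    fastest signal speed c (m/s), at Courant number `courant`."""
    return courant * dx / c

c_sound = 340.0                        # m/s, illustrative acoustic speed
print(max_time_step(100e3, c_sound))   # 100 km horizontal grid: ~5 min
print(max_time_step(500.0, c_sound))   # 500 m vertical spacing: ~1.5 s
```

This is why vertical acoustic modes are so punishing for an explicit nonhydrostatic model: the fine vertical spacing, not the horizontal grid, sets the step.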

Because the modelers claimed the quasi-geostrophic system was not accurate, Smagorinsky started using the hydrostatic system. It was avoidable had they made the minor adjustment of including the horizontal divergence in the wind.

Nick, because “you are a legend in your own mind” and on a par with Professor Kreiss, please provide a proof that a solution of the hydrostatic system is close to the corresponding solution of the full hyperbolic system. And scaling is not a proof, as has been clearly pointed out.

The atmosphere is not incompressible (see the explanation of the derivation of the pressure Poisson equation below). And Nick evidently does not know the meaning of unique – one and only one – and it is not the hydrostatic system.

The definition of the pressure Poisson equation:

We take the divergence of the momentum equation and then apply the incompressibility constraint. After some wrangling and cancellations, this leaves us with the pressure Poisson equation:
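The equation the colon above points to is, in the standard incompressible derivation, ∇²p = −ρ ∇·((u·∇)u). On a grid this reduces to a discrete Poisson problem. Below is a minimal, purely illustrative Jacobi iteration for such a problem (unit grid spacing, zero boundary values, a hypothetical point source as the right-hand side):

```python
import numpy as np

# Minimal Jacobi solve of a 2D Poisson problem, nabla^2 p = f, of the
# kind the pressure Poisson equation reduces to on a grid.
# Illustrative sketch only: Dirichlet p = 0 on the boundary, h = 1.
def solve_poisson(f, iters=5000):
    p = np.zeros_like(f)
    for _ in range(iters):
        # five-point stencil: p = (sum of neighbors - h^2 * f) / 4
        p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1]
                                + p[1:-1, 2:] + p[1:-1, :-2]
                                - f[1:-1, 1:-1])
    return p

n = 33
f = np.zeros((n, n))
f[n // 2, n // 2] = 1.0        # hypothetical point source in the middle
p = solve_poisson(f)
print(f"p at source ~ {p[n // 2, n // 2]:.3f}")
```

As the thread notes, the atmosphere is not incompressible, so this incompressible derivation is a model assumption, not a property of the full equations.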

What a joke. I wrote out a comment using the Kreiss theory for hyperbolic systems that showed all of the mistakes in Nick’s comment, and Charles deleted it. What an asshole.

Retracted because I was wrong.

Proving a mathematical theorem does not mean the corresponding physics is correct.

If the dynamical system is incorrect, how can the physics be correct if it depends on the dynamics, in particular the vertical velocity?

Obviously Nick did not read the manuscript, but just spouted the same wrong argument used when the modelers rejected the quasi-geostrophic system. The original quasi-geostrophic system did not include the horizontal divergence in the wind. When that is done, the system is very accurate, unlike the hydrostatic system.

When climate models have the vertical resolution to accurately simulate convective instability, they may prove useful. The GISS model has 20 vertical layers. It needs at least 200 to locate where a level of free convection will form. Until then they are useless.

If they could simulate convective instability, then no one would dispute the fact that the ocean surface cannot sustain more than 30 °C with the present atmospheric mass, and that CO2 plays a negligible role in the energy balance.

Climate has always changed and always will. The true climate deniers are the climate clowns who believe Earth was at an energy equilibrium in 1850.

Topical right now is flooding in Libya. I have been pointing out the increasing risk as the Med warms up for some years now. A good article here on past greening of the Sahara:

https://www.nature.com/scitable/knowledge/library/green-sahara-african-humid-periods-paced-by-82884405/

The NH has been exposed to increasing summer sunlight for 500 years now. The impacts are observable.

To quote George Box again as many here have already done numerous times: “All models are wrong, but some are useful.” GCMs have evolved from the useful stage to the fantasy stage.

And many other eminent statisticians!

The models cannot be accurate no matter the resolution if they are based on the wrong set of dynamical equations. Nor can they be accurate when violating the basic tenets of numerical analysis by using discontinuous forcing.

Since they have gotten climate to be redefined as only 30 or so years, it will always be changing.

Thirty years is nonsense–like most of climate science.

It isn’t in climate science.

Heh heh heh!!!

You mean Climate “Science”?

Climate Seance

Good one.

Sciencery: A philosophy where the truth is determined by funding.

Sciencer: A person who will, for a fee, make any product or service walk on a cloud of scientific respectability.

It was defined as 30 years around the time of the very bottom of the 60+ year AMO cycle.

The reference period for “anomalies” on many charts still uses the period around the “global cooling” scare.

Sort of like starting temperature measurements in the Little Ice Age. 😉

Thank goodness it got warmer !!!!

This is one of the things that has always bothered me. It is a simple assertion that by changing CO2 levels we can return to a prior climate state. Where is the proof that this assertion is true? It is insane to spend trillions of dollars without a guarantee that the climate will change to something that is BETTER than what we currently have.

And they can/will never tell you what the CO2 level should be.

Allow me to quote Pat Frank here:

‘In the physical sciences, an explanatory construct rises to the level of theory if it allows deduction of a unique observable or of a unique experimental outcome. If the observation does not materialize, or the experiment produces an alternative result, the theory has failed its test. Multiple such failures definitively falsify the theory’

The ‘theory’, in this case, is that CO2 is the control knob of the climate. It is falsified because it does not provide a unique result, whether we look at paleo data or even the garbage output of the GCMs.

Picking up on your comment, it is therefore a valid assertion that we cannot return to a prior climate state by changing CO2 levels.

2nd sentence of 2nd paragraph should read ‘does NOT provide’.

I miss the edit function….

Pat Frank has published a manuscript on the impact of accumulating parameterization errors in climate models that is worth a read.

Very much so. It never fails to get a rise out of the resident alarmists when the subject comes up.

Exactly.

What Pat and you are basically saying is that a good valid hypothesis must contain a functional relationship that returns a unique value for a given input. IOW, a given CO2 concentration will determine a given temperature. We know that is not the case.

‘IOW, a given CO2 concentration will determine a given temperature.’

Your summary is much more concise than mine, but yes, that is what the ‘control knob’ crowd asserts, at least under their previous paradigm of ‘global warming’.

Of course, now that they’ve moved on to ‘climate change’, they’re implicitly saying that a given CO2 concentration will determine a given climate.

This seems to improve ‘limited domain’ weather models—say for a region of the US. But the post says the ‘new’ methods are NOT applicable to climate models.

The climate models are hopeless and always will be. Because of the CFL constraint (translated: NCAR says halving the grid scale means 10x the computational effort), they cannot ever run at 2-4 km scales to model convection using CFD; it is 6-7 orders of magnitude computationally intractable. Weather models for a region can and do. So climate models must be parameterized, the parameters tuned to best hindcast 30 years per a CMIP requirement. Tuning drags in the natural variation attribution problem, which is insoluble once one recognizes natural variation provably still exists.

Akasofu had a nice paper on this published 2010. Used his footnoted illustration in a climate essay in ebook Blowing Smoke. Ignored by the climate modeling community.

Hindcasting to set parameters is a real problem for the climate models.

The data they use in their hindcasting is massively corrupted by urban warming, and by their own agenda driven “adjustments”..

That means the data is totally NON-representative of past global temperatures.

And it just gets worse, and worse from there. !

There hasn’t been a Grand Solar Minimum, like the Sun has just started, in the last 30 years. The last one was about 350-400 years ago. I’d guess the climate models aren’t parametrized for that.

We will see if the next solar cycle is low.

Remember, that solar forecast is built from models as well.

S25 has been higher than some solar models suggested it might be.

“NCAR says halving the grid scale means 10x the computational effort”

I believe they try to halve all dimensions–including the time increment. So that means an increase of 16 times–x, y, z, and the time increment.
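The arithmetic behind that factor of 16 is simply 2⁴: halving dx, dy, and dz doubles the point count in each dimension, and the CFL condition then halves dt as well.

```python
# Cost arithmetic behind the "factor of 16": one doubling of work per
# halved spatial dimension (x, y, z), plus the CFL-halved time step.
refinement = 2
cost_factor = refinement ** 4   # x, y, z, and t
print(cost_factor)              # 16
```

Refining by a factor of 10 in the same way would cost 10⁴ = 10,000x, which is the sense in which convection-resolving climate runs are called intractable above.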

The factor of 16 is correct. But resolution will not help if the dynamical equations are wrong.

The reduced system has no fast time scales so can use a much larger time step, but at the expense of solving the two elliptic equations.

I appreciate you confirming my comments–thanks.

Jim,

It is a little-known fact that the physical parameterizations take more time than the dynamics. That is because the dynamics are easily vectorized, but the parameterizations are not, because they have all kinds of if statements and exponentials.

Both global forecast and climate models are based on the hydrostatic equations and thus the wrong dynamical system of equations. Thus neither is the correct accurate well posed reduced system.

The 20-minute time steps are a laugh. Many things, but especially clouds, can change significantly in 20 minutes, and thus the local albedo. One-hour steps would probably be a better match to the parameterizations used.

You can’t do 1 hour steps because of the CFL condition. Cloud changes are not part of the pressure/volume fast time scale and so averaging over the step works fine.
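For readers unfamiliar with the constraint Nick cites: an explicit scheme must keep the Courant number c·Δt/Δx at or below order one for the fastest waves present. A minimal sketch, with illustrative numbers that are assumptions rather than values from any operational model:

```python
# Hedged sketch of the CFL constraint on an explicit time step:
# dt <= C * dx / c_max, where c_max is the fastest signal speed the
# model must resolve. The numbers below are illustrative assumptions,
# not values taken from any particular weather or climate model.
def max_timestep(dx_m: float, c_max_ms: float, courant: float = 1.0) -> float:
    """Largest stable explicit time step (seconds) for grid spacing dx_m."""
    return courant * dx_m / c_max_ms

# A 100 km grid with ~340 m/s acoustic waves allows roughly 5 minutes,
# far short of a one-hour step.
dt = max_timestep(100_000.0, 340.0)
print(f"{dt / 60:.1f} minutes")  # ~4.9 minutes
```

A model that filters out the fast acoustic/gravity modes (or treats them implicitly) is limited instead by the much slower advective speeds, which is why such systems can take far longer steps.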

“so averaging over the step works fine”

LOL, averaging… works so well for the massive model spread, hey Nick! 😉

Thanks for admitting the climate models are totally unrealistic.

“Cloud changes are not part of the pressure/volume fast time scale”

Ah, so climate models don’t do clouds properly even in their nonsense physics routines.

What are they good for except propaganda pap ?

We had clear sky yesterday late evening, no clouds in sight. 45 minutes later it rained in a thunderstorm.

Ah yes, using the only tool in the climate scientist’s toolbox — the average.

Clyde,

The models cannot resolve small scale features. Instead they use columnar (discontinuous) parameterizations. This means the corresponding continuum solution would be discontinuous. But numerical analysis requires that the solution be differentiable in order to apply numerical approximations of derivatives. So the modelers add unrealistically large dissipation terms. Sign onto The Weather App and watch the sudden change from current radar to forecast radar, and notice the sudden discontinuity in the size and intensity of the data.

How is the albedo handled? Is the global average treated as a constant?

The statement is that weather models insert new observational data every 6-12 hours to keep the hydrostatic model from going off the rails due to the ad hoc boundary layer dissipation needed to keep the surface velocity from growing unrealistically. This is due to the vertical integral propagating errors instantaneously (the behavior of the reduced system elliptic equation for w is to actually suppress small scale errors at the surface). This same insertion of data is obviously not possible in climate models predicting the future. So what keeps them from going off the rails? Nothing. The only way they continue to run is excessive dissipation leading to a large continuum error.

Anyone who understands anything about statistics and computer models knows — or should know — that any model of a non-deterministic system, such as climate, with numerous variables — many of which are co-linear — is never going to be reliable for predictive forecasting.

This is a 100% fact that nobody, even Nick Stokes, can dispute. Quite how climate ‘scientists’ can claim otherwise is beyond me and should be beyond anyone with an ounce of brain cells.

Given this fundamental reality you can talk about this stuff with as much fancy language as you like, but it will never alter the fact that climate predictions made using models are totally and utterly worthless; and the most powerful supercomputers available now or in the future will never change it.

I think you will find that non-linear, chaotic systems are deterministic. That’s the surprising thing about chaos, that a system whose next state depends on the current state can be chaotic.

Well, that dismisses not only numerical weather forecasting, which is tested on a daily basis, but the whole engineering business of CFD.

Only yesterday WUWT ran a post by Cliff Mass describing the extraordinary accuracy of the numerical prediction of the path of hurricane Lee.

Nick,

Are you suggesting that rather than report a single success, WUWT should have reported the many, many cases of failure to usefully predict hurricane tracks?

Are you supporting the modern climate philosophy that achievement of “ambition” is more desirable than achievement of correct physics, chemistry and mathematics?

The modern method is lovely because proponents can fail to show failures. When did we last see an admission that just about all past predictions of future climate by a given date have failed?

Geoff S

“WUWT should have reported the many, many cases of failure to usefully predict hurricane tracks?”

Would you like to report one (1)?

You’re pulling our chain aren’t you Nick?

This site doesn’t provide enough gigabytes for a single comment upload of all the cyclone / hurricane / typhoon forecasts that have been duds.

I think Nick’s mind only sees a couple of days back in time.

A form of memory dementia!

One (1)

Many, many, many…. seems you just haven’t been paying attention.

Too busy admiring yourself in the liar’s mirror?

Lol, weather forecasting, really

Weather forecasting is based on short-term known, measured values, and is consistently checked for accuracy against reality (basically the very opposite of climate models).

Even then, accuracy of forecasts can vary greatly from model to model (as was shown in the recent topic).

You have yet again shown that you are basically clueless about how engineering models work and how they are constantly validated.

Climate models have NEVER been validated against any reality.

And they never will be.

Weather models are updated continuously with Doppler radar, geosynchronous satellite imagery showing clouds and temperatures, ground weather stations providing temperatures, winds, humidity, pressure, clouds, and precipitation, and weather balloons. Yet, I see precipitation forecasts 5 days out changing from day to day, and sometimes even being wrong on the target day. I’m not impressed with the vaunted accuracy of weather forecasts.

At least the hurricane modelers show an increasing uncertainty with time in the path of the hurricane due to inherent model errors.

A few years ago one model showed a hurricane going north and another showed it going south along the East coast. I guess you forgot to mention that case.

Not enough space on any page to show climate model uncertainty! 🙂

Even if you could, it would be totally meaningless, because the models are totally meaningless.

binice2000, methinks you are confusing climate forecasting with weather forecasting. Climate by definition is a long term average of weather data. Short-term models used to forecast hurricane tracks do work pretty well, but often disagree. NHC forecasters are well aware of that and respond by using their best judgement to publish a track forecast.

?? That is what I have been saying.

Methinks you mis-interpreted my comments 🙂

“Short-term models used to forecast hurricane tracks do work pretty well, but often disagree”

Umm… do you want to re-word that statement? 😉

This was also the case at NOAA in Boulder. They would look at multiple model outputs and any available in situ data and then make a best guess.

No argument from me!😀

My impression of daily weather forecasts is that they do a good job with temperature (Which, because of the seasons, shouldn’t be a surprise.) and with winds and clouds. However, the rain forecasts appear to have a high false-positive error rate for large areas, except for the Pacific Coast of the US. As Geoff, below, alludes, a single good prediction says nothing about the average success. Nor does it consider the difference between false-positive and false-negative error rates.

OK, now try predicting the tracks of next year’s hurricanes.

Nick, numerical weather forecasting might have improved significantly but remains notoriously unreliable in areas where the variables change constantly, such as the UK. Where I live the forecast is pretty much useless — and that’s remarkably close to an airfield weather station. As for hurricane Lee, are you seriously claiming it’s now possible to predict hurricanes with anything close to the accuracy claimed by climate models?

Frankly, your dismissive response reveals your total ignorance on the subject. This is precisely what I find when I challenge climate scientists about the lack of confidence intervals associated with computer models. In truth, you’d achieve the same level of accuracy by throwing a couple of dice.

Just what does a 50% chance of rain mean? Maybe it won’t, maybe it will? Flip a coin! And that’s just for tomorrow.

You might do *better* rolling a couple of dice!

Markw2,

I am just providing hard evidence that the claims that climate models predict the future cannot be true if they are using the wrong dynamical system, let alone questionable parameterizations and numerical methods.

I am seeing a lot of discussion here about whether models are useful or not. Well, that depends. Atmospheric models are useful for forecasting, but only within their design limits. There are two problems: it is impossible to collect enough data about the atmosphere, and impossible to accurately set the initial conditions for a perfect model run. This has been known since the early 1960s, when MIT researcher Edward N. Lorenz discovered it and published a seminal paper on the subject. It remains true. His work became the basis for chaos theory. Atmospheric models basically show probabilities, not certainties.

Fast forward to today. Forecasters do use models, but do not mindlessly depend on them. And models looking at the same situation rarely, if ever, show identical results. Daily weather forecasting models are pretty good out to 36 hours. After that they steadily degrade until they are largely useless about 10 days out. That’s the medium range forecast.

Back in the 80s, NOAA put considerable treasure into improving forecasting by modernizing satellites, radar, and models. They succeeded, but what does that mean? For tornadoes it meant improving reasonably accurate track forecasts extending from about 8 minutes to 12, sometimes out to 15. For hurricane tracks, it meant going from 3 days to 5. Those are significant improvements that can save lives. But even today, you will see forecasters discuss how the various hurricane models they use agree or disagree, and then use their judgement to issue a forecast. Intensity forecasting is not very good for either. Why? Inadequate data and theory about what happens inside the storm. That is still a problem.

The NWS forecasts climate and gets help from models. But their climate forecasts only go out 3 months, and only discuss probabilities of temperature and precipitation being higher or lower than average.

There is nothing wrong with atmospheric models or using them. But everyone must keep their limitations in mind. As always, extrapolation is risky.

The GCMs are gigantic exercises in extrapolation.

Chaotic systems possess three properties (among others): they are non-linear, they are deterministic, and they have something called a horizon of predictability. That period depends on the system. Planets may have a horizon of predictability of around a million years or so. It’s estimated that weather systems have a horizon of predictability of around two weeks. No matter how accurate the initial conditions are, how good the equations are, and how good the implementation is, the horizon of predictability limits how far into the future your weather prediction will be.

Julius,

The model that Lorenz used was a severely truncated approximation of the dynamical equations.

The full system is essentially hyperbolic and not chaotic.

Gerald, the modern models are also truncated. Lorenz was using the best model he had. Doesn’t matter; no amount of modeling will ever match the atmosphere. That is what Lorenz demonstrated.

Lorenz used a 3 variable equation. He had no illusions that he was modelling the atmosphere. He wanted to demonstrate something about nonlinear differential equations.

Nick, you know better than that.

Well, the famous butterfly equations had 3 variables.

I have a post here, with a solver and a visualiser. And yes, he did later get up to 12 variables. And he wrote a famous paper on the general circulation of the atmosphere, which had a section on recent experiments by Smagorinsky and Manabe, about which he was quite upbeat.

Yes, but that was after he noticed that entering truncated figures to resume a run of a weather model had given a markedly different trajectory.

It was one of those “hmm, that’s strange” moments, and full credit to him for following up on it.

The 3-variable equations were his simplified approach to trying to work out what was happening.

It was a great demonstration of the power of curiosity.

Might I suggest that his observation might be due to the fact that the use of discontinuous forcing spreads energy to the shortest wavelengths instantaneously, and that could have an adverse impact on the if tests in the parameterizations. For a hyperbolic system with differentiable forcing that is properly resolved, I believe a numerical method that is accurate and stable will converge to the true solution for a period of time, i.e., this type of behavior would not occur.

That probably wasn’t the case.

The wikipedia article on the butterfly effect has a quote from the man himself about what piqued his interest.

The observation dates back to early weather models in the early 1960s.

From your reference:

He discovered the effect when he observed runs of his weather model with initial condition data that were rounded in a seemingly inconsequential manner. He noted that the weather model would fail to reproduce the results of runs with the unrounded initial condition data. A very small change in initial conditions had created a significantly different outcome.[3]
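Lorenz’s rounding experiment is easy to reproduce with his later 3-variable system. The sketch below is illustrative only, not Lorenz’s original weather model: it integrates the 1963 convection equations with the classic chaotic parameters (σ=10, ρ=28, β=8/3) using a hand-rolled fixed-step RK4, and shows a one-part-in-a-billion change in the initial condition growing into a completely different state.

```python
# Illustrative sketch of sensitive dependence on initial conditions
# using Lorenz's 1963 3-variable system (NOT his weather model).
# Parameters sigma=10, rho=28, beta=8/3 are the classic chaotic values.
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    """One classical fixed-step RK4 update of the Lorenz system."""
    def add(a, b, s):
        return tuple(ai + s * bi for ai, bi in zip(a, b))
    k1 = lorenz(state)
    k2 = lorenz(add(state, k1, dt / 2))
    k3 = lorenz(add(state, k2, dt / 2))
    k4 = lorenz(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def run(x0, steps=4000, dt=0.01):
    """Integrate 40 time units from (x0, 1, 1) and return the final state."""
    state = (x0, 1.0, 1.0)
    for _ in range(steps):
        state = rk4_step(state, dt)
    return state

# Two runs whose initial x differs by only 1e-9:
a, b = run(1.0), run(1.0 + 1e-9)
separation = max(abs(ai - bi) for ai, bi in zip(a, b))
print(separation)  # grows many orders of magnitude beyond the 1e-9 perturbation
```

Over short runs the two trajectories stay together; it is the exponential error growth over time, not the integrator, that produces the divergence, which is exactly the rounding effect described in the quote above.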

This would not occur in a well posed hyperbolic system, i.e., the Lax theory states very clearly that in that case an accurate and stable numerical method will converge (not be sensitive to small errors).

Weather models have always used discontinuous forcing, which means the corresponding continuum solution is discontinuous. This violates the basic requirements of numerical analysis and so can indeed lead to strange behavior.

The weather model software he was using over 60 years ago almost certainly left a lot to be desired.

The software was likely to have been written in FORTRAN, but assembly language wasn’t uncommon back then.

The bottom line is that this gives too much credit to the technology available in the late 1950s and early 1960s.

Old cocky,

The same sensitivity exists in today’s models. Experiments with a change in the last bit lead to different results (Williamson).

Considering the amount of dissipation in the models, it would seem they would behave more like a heat equation and damp all errors. That is why I suspect the discontinuous forcing is the culprit.

I was referring purely to Lorenz’s observations of sensitive dependence on initial conditions, and the software in which he observed the phenomenon.

Old Cocky,

Understood. And it could be attributed to more primitive hardware and/or software during that time. Was he solving the hydrostatic system or a simpler system?

Jerry

Do you know of a reference where he described the model he was using?

I would think it was a descendant of Charney and von Neumann’s ENIAC program you referred to above, as Lorenz was at MIT as well.

From the Lorenz quote in the Butterfly article, it had evolved 2 months of weather in an hour or so, so it can’t have been particularly complex.

Most people just can’t grasp how limited the processing power and storage of those early computers were. The computer being used had vacuum tubes, so if it was lucky it had wire-wound core memory and mag tape.

Ah, the wikipedia article on the Lorenz system gives a bit more info. It was a convection model, incorporating 3 equations.

Old Cocky,

I found a reference where Peter Lynch recreated the ENIAC model.

It was a version of the barotropic model. I will read it in more detail.

I want to find out if they just ignored the horizontal divergence, because it is normally assumed that it is small at 500 mb. Also whether there were any physical forcing terms. If Lorenz did not use this model, but only his 3 ODEs, then his chaos statements about the atmosphere may not be correct.

https://journals.ametsoc.org/view/journals/bams/89/1/bams-89-1-45.xml?tab_body=pdf

Thanks for that link. I’ll read that a bit later.

The history of computing was one of my interests when I was younger, but work and life seemed to intervene.

I may be wrong, but it always seemed to me that sensitive dependence on initial conditions said more about the limitations of numerical analysis than it did about the actual system being modelled.

Oh, how cool. The 1963 Lorenz paper is available for download – https://www.semanticscholar.org/paper/Deterministic-nonperiodic-flow-Lorenz/b021e8cf155a7c4c8244506c7caaa99bea0eaac9

I wonder if the 1972 paper is as well.

MIT has a full list of Lorenz’s publications. He was rather prolific – https://eapsweb.mit.edu/about/history/publications/lorenz

Old Cocky,

I thought the ENIAC model was a barotropic vorticity model. I guess the question is how the 2D vertical component of the vorticity equation was updated, if that is the time dependent equation they used.

Interestingly the reduced system’s only time dependent equation is the 3d version of that equation, with the horizontal divergence and total wind determined as stated above. It is extremely stable if the physical forcing is sufficiently differentiable.

Old cocky,

Can you tell me a bit about yourself in an email? I might want to take this off line for reasons I will tell you about.

Hrhrbb@gmail.com

Jerry

“no illusions that he was modelling the atmosphere.”

Yet the climate modellers “think” they are… when in fact they should be under no illusions that they are actually modelling the atmosphere.

You do know the climate models are only an illusion, don’t you Nick!

I gave Nick a hard time for looking in the wrong place, so should be even handed.

Lorenz’s 3-variable equations were his trying to understand the phenomenon of sensitive dependency on initial conditions (chaos).

That work was triggered by early weather forecasting software, but had no direct relationship with weather or climate.

All I am saying is that the Lorenz set of equations is misleading relative to the nature of the full hyperbolic dynamical system. The numerical solution of the full dynamical system will eventually lose accuracy, and even the continuum solution can become unresolvably small scale due to nonlinearity. The Kreiss theory can prevent the resonance between the different frequencies for a finite time, but not forever.

“Atmospheric models basically show probabilities; not certainties.” I disagree; they pretend to show probabilities. The evolution of the system is chaotic; probabilities are based on past behaviour and extrapolated. Solutions of a chaotic model require future evolution; the past is irrelevant over the long term, even if no parameters are changed, e.g. CO2 levels. Google the simple 3-body Newtonian gravitational model. It’s only a simulation, but it illustrates the uselessness of probabilities to model long-term future evolution in chaotic systems. Add to that the roundings that occur in computer operations and deviation from reality will happen even more quickly.

Models are very useful, Julius, and if climate ‘scientists’ used them, first, without making outrageous claims of their predictive capabilities for years and decades into the future; and, second, provided confidence intervals for the results they produce, that would be fine.

The problem is that we have results from models and predictions made for ridiculous periods into the future presented as facts. This is an appalling indictment of climate science and climate scientists who should be calling out the way that results are presented. The only assumption one can come to is that the climate ‘scientists’ themselves are happy to go along with this, presumably because of the gravy train that goes with it.

One problem that is rarely mentioned is that meridional (poleward) heat transport at the mid and high latitudes is essentially dependent on synoptic and mesoscale weather systems. Since we lack a solid theory describing heat transport by the atmosphere, the result is that climate models can’t properly reproduce meridional heat transport. This transport is the most fundamental climate variable from which the many climates of the planet, the latitudinal temperature gradient, and the average temperature of the planet result. Changes in meridional transport are responsible for Arctic warming in the 21st century, but models can’t reproduce that, so climateers ignore it.

Model defects result in a wrong representation of how the climate functions and changes, and since climateers are utterly dependent on them they are trapped in a wrong paradigm of how the climate changes.

The Hubris intrinsic to this and all models is simply off-the-scale mind-blowing, the wrongness is grotesque and deeply concerning.

That ‘some people’ imagine they can reduce what is effectively A Living Thing to a set of mathematical instructions is beyond crazy.

This is a living thing with nigh-on 5 billion years of data (information) stored within it, and it uses that data on a day-by-day and second-by-second basis to inform its behaviour – again, minute-by-minute.

And it uses ALL that data, all the time, in real time, in parallel and everywhere.

Yes, some of that data is stored away in sediments, rocks and soils, but most of it is imprinted into life itself. That 5 billion years of data is in all our DNA, the DNA of plants, of bacteria/viruses and of fungi.

And they call on it constantly.

There are more bacteria in one handful of ‘dirt’ than there are, or ever will be, humans, even in anyone’s wildest dreams.

You might have 10,000 billion organisms, each with who-knows-how-big a DNA store, all operating in real-time parallel – just what is the computational power of that?

And ‘some people’ think they can replicate that with a few fancy formulae running inside a (literally smoking red-hot) box full of Nvidia chips!

Jeez, What Went Wrong Here

It is the (oft?) quoted saying about Quantum Mechanics: if someone, anyone, ever gets to their feet and asserts that they understand Quantum Mechanics and ‘have all the answers’, only one thing is certain, viz: you are in the company of a self-deluded fool who never even understood The Question, let alone knows any answers.

There is actually another certainty there – that if you follow their advice/guidance, the only thing that will happen is a complete and utter disaster.

wrap up warm folks – your kindergarten teacher lied > Deserts are Cold Places

I notice that Nick Stokes has answered several comments, but none by Gerald Browning made in response to Stokes’ original comment.

Anyone know, as opposed to a guess, why that is?

Politeness!

That’s why I didn’t interrupt Gerald while he was dismantling and destroying Nick’s comments. 🙂

Would you be so kind as to describe in simple terms exactly what dismantling Gerald is doing? From my reading, Nick is exactly correct – the paper does not appear to “challenge” the basis of GCMs, it seems to be proposing a hypothetical new framework for modeling over limited geographical areas.

So proving mathematically that the hydrostatic system is not the correct dynamical system does not challenge the basis of the GCMs?

Alana,

Has Nick ever published a theoretical mathematical manuscript in a reputable mathematical journal? Or has he only published numerical model results by misusing numerical analysis theory as developed by famous mathematicians such as Peter Lax? Kreiss has shown that if numerical turbulence models use the wrong size or type of dissipation (common practice in many numerical models), then the model will not reproduce the correct continuum solution. Kreiss did this for the full nonlinear turbulence equations. Can you cite a similar groundbreaking result by Nick?

Jerry

Jerry, I don’t pretend to have the requisite knowledge to evaluate your paper myself, much less the entire field of climate modeling. My reading of the paper is that you have not overturned the basis of modern GCMs, but rather argue for a possible, but theoretical, improvement to regional (or large scale) modeling. i.e. you’re saying, “this might be a way to do this specific thing better,” which is not the same as saying, “existing climate models are not skillful,” which I think is the takeaway the article title implies. Modern GCMs are skillful in reproducing substantive elements of the global climate system such as circulation, and show skill in reproducing historical forced trends in variables such as surface temperature. I don’t think anyone is under the illusion that they cannot be improved.

Nick’s comments are aligned with my understanding and seem extremely reasonable.

“and show skill in reproducing historical forced trends in variables such as surface temperature.”

roflmao…

How can you say that with a straight face!

Hilarious.

That just PROVES they are wrong, because the surface temperatures they use are fabrications massively tainted by urban warming and agenda-driven “adjustments”.

Show us a model that gives NH hindcasts with 1930/40s temperatures similar to now.

There is ZERO skill in hindcasting, even to faked data, when you have so many parameters you can mal-manipulate.

“Nick’s comments are aligned with my understanding”

Then they are provably WRONG!

Only because they are tuned to reproduce the past 100 -150 years. How accurate are they compared to the last 2000 years?

Even if the Minoan, Roman and Medieval warm periods were regional, do the models show that?

Unless you understand all of the skeletons in the closet of climate models, you cannot understand their uselessness. When I was at NCAR I coded a GCM and we used to laugh at the output it produced, i.e., how unrealistic it was. Warren Washington spent untold computer hours just trying to get his model to run for a year. Unrealistically large dissipation and considerable parameterization tuning were required.

None of us believed the result.

Since then I became trained in numerical analysis and partial differential equations and can easily point out all the mistakes that have been made, including the use of the wrong system of equations.

“Has Nick ever published a theoretical mathematical manuscript in a reputable mathematical journal?”

Yes, indeed. Here are a few.

As I thought, you are a modeler and not an expert in analytic partial differential equations. Kreiss is, and is famous in that field. He is known for his theories on well posed boundary conditions for continuous and discrete hyperbolic equations, for his theoretical work on ordinary differential equations, and for his estimates of the smallest scales in incompressible turbulence, just to name a few. He held positions at Uppsala University, NYU, Caltech and UCLA. And you have the audacity to denigrate him in your initial comment. Speaks highly of your character.

I said nothing in denigration of Kreiss, who is, sadly, no longer with us.

I note though that your paper, published three years ago, has not been cited in the mathematical literature.

The hell you didn’t. You implied that his theory for hyperbolic systems with multiple time scales never worked, without reading my manuscript that shows it is bang on. Shame on you.

I said that it hasn’t been made to work. Has it? An actual GCM?

Your ms is behind a fierce paywall.

And that means the theory is wrong? Your devious comments are a hoot. The modelers have hidden the serious problems with the hydrostatic system, which deviates from reality by 50% in 36 hours, using ad hoc, physically unrealistic numerical tricks. Scientists such as Peter Lax and Heinz Kreiss move science forward, while scientists like you attempt to prevent progress by supporting the outmoded status quo.

Most scientific publications are behind a paywall. Does that mean you don’t read any publications? Give me a break.

Nor did I expect it to be. Do you think the modelers would reference a manuscript that proves they have made a serious mistake? I published the manuscript to complete the work Heinz and I did together, knowing that in the future someone will discern that he and I were aware of the truth. You might check all the references to his work as compared to yours. BTW, Heinz wanted to publish a book with me about the theory of hyperbolic systems with multiple time scales, but I quit out of disgust with the dishonesty of the modelers before we could do that.

He should follow Dr. Frank’s example and cite himself. Then, they would at least each have one.

https://scholar.google.com/scholar_lookup?title=LiG+Metrology%2C+Correlated+Error%2C+and+the+Integrity+of+the+Global+Surface+Air-Temperature+Record&volume=23&doi=10.3390%2Fs23135976&journal=Sensors&publication_year=2023&author=Patrick+Frank

https://scholar.google.com/scholar?cites=14649038756477923777&as_sdt=5,26&sciodt=0,26&hl=en

Or else, he could ape Dr. Frank by just jettisoning citations as an article metric, even though it is listed as such by Dr. Frank’s vanity publisher…

So truth is proportional to number of citations?

Is this what you are claiming?

Your sophistry is on par with Nitpick’s.

Nope. It’s certainly possible for an uncited paper, in modern times, to be relevant/important. And it’s also possible for a well cited paper to be FOS. The only problem there is that the probabilities of either challenge the capability of my engineering-spec PC to calculate…

Now read my new comment after Nick Stokes’s first comment of the post.

Nice backpedal, blob.

There is a real scientific comment for you: when you cannot disprove the mathematics, try to smear the manuscript or author in some other way.

The CSIRO climate models are some of the worst of the worst.

Not something to brag about!

yawn.

Your understanding?? It matters not!

It is irrelevant to rational discussion.

Bnice2000,

Please read my new post right after Nick’s initial post on this site.

Wow! Complicated stuff! And some people actually think climate science has definitive answers?

By constraining the 3D problem to 2D, you are headed in the right direction.

The n-body problem can’t be solved in 3D. It can in 2D.

GL Browning spouted this stuff on Climate Audit for years until forced to debate Judith Curry and Lucia, wherein he was embarrassed.

TLDR:

we all are forced to rely on weather models to forecast hurricane tracks, which Cliff Mass argues are great.

weather models form the basis of climate models.

climate models have reliably forecasted global-scale, decadal-scale climate states.

skepticism is broken.

You really are a clown, mooosh

Weather models are good for a few days, when you have all the detailed local data, and can keep checking against what is actually happening.

Cliff showed that hurricane forecasts differ greatly even a few days out, and that one out of several seemed to be doing a good job, that particular time.

Climate models have not reliably forecast anything, ever.

They do not have that ability.

You are an embarrassment.

Your mind is broken.

Pat Frank has shown that a simple linear equation perfectly reproduces the results of the climate model ensemble out into the future. The uncertainty associated with the linear equation also gets so large so quickly that it is impossible to *know* what will happen in the future. The climate models have the same issue. Their uncertainty becomes so large after even just a decade that no one knows for sure what is going to happen.

It’s like the fortune teller at the local carnival using a cloudy crystal ball. The fortune teller *sees* the future in the cloudy stuff. The climate modelers *see* the future in their models even though its so cloudy it’s actually impossible to see anything specific. It’s why the predictions of the climate models are so much like the predictions of a carnival fortune teller! The predictions never come true!

Well Judith Curry was one of the reviewers of the manuscript and admitted she learned something new. Oops.

So weather models are based on the wrong set of dynamical equations, as are climate models. Ignoring accumulating truncation and parameterization errors, a climate model must assume it is based on the correct set of dynamical equations (which it is not), that the numerical approximation accurately approximates the correct continuum system (which it doesn’t, because it is the wrong dynamical system and the numerical accuracy is destroyed because discontinuous forcing is used, necessitating excessive dissipation to keep the model from blowing up), and that the errors in the parameterizations are less than the truncation and continuum errors.

“Well Judith Curry was one of the reviewers of the manuscript”

And moosh makes a mosh of himself… yet again!!

So Funny! 🙂

I published the manuscript and went through the review process by two prominent atmospheric scientists just to stop comments such as yours. The mathematics speaks for itself. The numerical example is just to illustrate the theory.

The original paper, “The unique, well posed reduced system …”, uses a leapfrog method to solve multiscale equations. The leapfrog method is very fast, but it magnifies even the tiniest imprecision in initial conditions exponentially. That’s the reason why professional equation solvers shun it. I wonder how your unique system handles it.

When a physical solution is growing as in the example in my manuscript, the extraneous solution in the leapfrog method is decaying. I prefer to use that method when possible so that there is no implicit numerical damping present.

True, in part. The leapfrog method generates two extra terms to the original difference equation, both originating from the error in initial conditions. One is decaying, and the other one is growing exponentially.

The leapfrog time difference has two solutions. Both have magnitude 1 so it is a neutral method. One of the solutions oscillates rapidly in time and is often suppressed using a time filter that tends to reduce its accuracy. But when the method is used to approximate a growing physical solution, one solution is greater than 1 (thus approximating the growing physical solution) and the other less than 1 and decaying.

Just to clarify. The product of the modes is always -1. For a hyperbolic system they are +1 and – 1. For a system that has exponential growth, one is positive and greater than 1 and the other is negative and less than 1, but the product is still -1.
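The mode arithmetic in these last few comments can be written out explicitly. This is the standard textbook analysis applied to the scalar test equation u′ = λu, offered here as an illustration rather than anything taken from the paper itself:

```latex
% Leapfrog applied to the scalar test equation u'(t) = \lambda u(t):
%   u^{n+1} = u^{n-1} + 2\,\Delta t\,\lambda\,u^{n}.
% Substituting u^{n} = \rho^{n} gives the characteristic polynomial
\rho^{2} - 2\,\Delta t\,\lambda\,\rho - 1 = 0,
\qquad
\rho_{\pm} = \Delta t\,\lambda \pm \sqrt{1 + (\Delta t\,\lambda)^{2}} .
% The constant term is -1, so the product \rho_{+}\rho_{-} = -1 always.
% For oscillatory (hyperbolic) problems, \lambda = i\omega with
% |\omega\,\Delta t| < 1, both roots have magnitude 1: a neutral scheme.
% For real \lambda > 0 (a growing physical solution), \rho_{+} > 1
% tracks the growth, while the computational mode
% \rho_{-} = -1/\rho_{+} is negative with magnitude below 1 and decays.
```

This matches the statements above: the product of the two modes is always −1, both modes are neutral in the purely hyperbolic case, and when the physical solution grows the extraneous mode decays.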

Before commenting please read the first comment after Nick Stoke’s initial comment.