How do we go about building climate models? One thing is that we would like to build our model to represent everything that we know happens. If we could actually do so -- mainly meaning, if the computers were fast enough -- life would be simple. As usual, life is not simple.
I'll take one feature as a poster child. We know the laws of motion pretty well. I could write them down pretty easily, and with only moderately more effort write a computer program to solve them. These are the Navier-Stokes equations. On one hand, they're surprisingly complex (dynamical chaos comes from them), but on the other, they're no problem -- we know how to write the computer programs to do conservation of momentum. Ok, entire books have been written on even a single portion of the problem. Still, the books have already been written.
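For reference, the equations in question, in their incompressible textbook form (standard notation, nothing specific to any one model):

```latex
% Incompressible Navier-Stokes equations: conservation of momentum
% plus incompressibility. u = velocity, p = pressure, rho = density,
% nu = kinematic viscosity, g = body force (e.g. gravity).
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu\nabla^{2}\mathbf{u} + \mathbf{g},
\qquad
\nabla\cdot\mathbf{u} = 0
```

The advection term (u·∇)u is the "fast winds here shove themselves over there" piece mentioned below, and its nonlinearity is where the dynamical chaos comes from.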
The problem is, if you want to run your climate model with a representation sufficient to capture everything we know is going on, you need your grid points only 1 millimeter apart. That's fine in principle, but it means something like 10^30 times as much computing power as the world's most powerful computer today. (A million trillion trillion times as much computing power.)
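A rough back-of-envelope of where a figure like that comes from (my illustrative numbers, not the post's):

```python
# Back-of-envelope: grid points needed to cover the atmosphere at 1 mm
# spacing, versus a typical modern climate-model grid. All figures are
# round-number assumptions for illustration only.

EARTH_SURFACE_M2 = 5.1e14   # ~surface area of the Earth, m^2
ATMOS_DEPTH_M = 1.0e4       # ~depth of the weather-active atmosphere, m

# Grid points at 1 mm (1e-3 m) spacing in all three directions:
points_dns = (EARTH_SURFACE_M2 / 1e-3**2) * (ATMOS_DEPTH_M / 1e-3)
# ~5e27 points

# A typical climate model: ~100 km horizontal spacing, ~50 vertical levels:
points_model = (EARTH_SURFACE_M2 / 1e5**2) * 50
# ~2.5e6 points

print(f"1 mm grid:   {points_dns:.1e} points")
print(f"100 km grid: {points_model:.1e} points")
print(f"ratio:       {points_dns / points_model:.1e}")
# The point count alone is ~1e21 times larger; shrinking the time step
# in proportion to the grid spacing multiplies the cost again, pushing
# the total toward the 10^30-ish figure quoted above.
```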
What do we do in the meantime?
What we do in the meantime is realize that there are simpler ways to represent almost exactly, or at least fairly well in an average sense, what is going on -- even though we don't have computers enormously more powerful than currently exist. This is where we encounter 'parameterizations'.
A parameterization is where we try to represent all those tiny features that we don't (currently) have the computing power to represent directly. In an ideal world (which includes infinite computer speed), we wouldn't need any parameterizations.
Since we live in the real, and imperfect, world, we deal with the fact that we don't have infinitely fast computers. But we do have amazingly fast computers, which are able to deal with problems that we never used to be able to consider. Still, they're not infinitely fast, or even as fast as we already know enough science to make use of.
Now, one of the things which leads to dynamical chaos is that momentum advects (fast winds here shove themselves over there) as well as dissipates (tiny blobs, at the 1 mm scale I mentioned, whirl about fast enough that the energy gets turned into heat, and the air comes to a stop). We know this. And given a computer that can cover everything down to the 'dissipation scale' (the 1 mm or so), we can do a fair job of running a model that predicts what you see. (Some interesting exceptions exist, but that's for a different note.) The problem, for climate modeling, is that the volume we can manage this way is the size of a fish tank. And not necessarily that large a fish tank (about 1000 L, 250 gallons).
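A quick check on that fish-tank figure (my arithmetic, just confirming the order of magnitude):

```python
# Sanity check: a 1000 L (1 m^3) fish tank resolved down to the
# ~1 mm dissipation scale quoted above.
volume_m3 = 1.0          # 1000 L = 1 m^3
spacing_m = 1.0e-3       # ~1 mm dissipation scale
points = volume_m3 / spacing_m**3
print(f"{points:.0e} grid points")   # 1e+09 -- demanding, but doable today
```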
On the other hand, we do have simplified representations of what happens at those smaller scales. And they do a moderately good job (bulk aerodynamics) to a pretty good job (Monin-Obukhov similarity theory). And, of course, there are more elaborate methods, with correspondingly better behavior. So, again, what do we do in the meantime?
One thing is, we take our parameterization and see how closely it lets us match what we observe about the world. For turbulence parameterizations (the example I've taken at the moment), this means observing how the wind changes with distance above the surface, and how it changes as you move across a new surface. That means, for instance, taking a field with known characteristics (freshly plowed, let's say) and putting wind-measuring devices (anemometers) on towers along the prevailing wind direction. Then you look at how the wind speed changes as you move down the wind path, and as you move up from the surface. If your parameterization has some tunable numbers, you tune them to get the best representation of these observed winds.
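As an illustration of that tuning step, here is a minimal sketch assuming the neutral-stability logarithmic wind profile (a standard limiting case of Monin-Obukhov theory); the tower heights and wind speeds are invented sample numbers, not real observations:

```python
# Sketch of tuning a turbulence parameterization's constants to tower
# observations. We fit the neutral log-law wind profile
#     u(z) = (u_star / k) * ln(z / z0)
# where the friction velocity u_star and roughness length z0 are the
# tunable numbers. Data below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

KARMAN = 0.4  # von Karman constant

def log_wind_profile(z, u_star, z0):
    """Neutral-stability log-law wind speed at height z (m)."""
    return (u_star / KARMAN) * np.log(z / z0)

# Hypothetical anemometer heights (m) and measured speeds (m/s)
# over a freshly plowed field:
z_obs = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
u_obs = np.array([2.1, 2.8, 3.5, 4.2, 4.9, 5.6])

(u_star, z0), _ = curve_fit(log_wind_profile, z_obs, u_obs, p0=[0.3, 0.01])
print(f"fitted u* = {u_star:.2f} m/s, z0 = {z0:.4f} m")
```

The fitted u* and z0 are then judged by how well they reproduce winds the fit never saw -- over other fields, other heights -- not by any climate outcome.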
What we don't do is tune (change) those numbers until we get some particular climate change sensitivity. On the one hand, it's probably impossible computationally (the globe is tremendously larger than the field you plowed). On the other hand, it is also dishonest and undesirable. Undesirable because it means that your turbulence parameterization wouldn't be representing turbulence nearly as well as it could. (I'll take it as given that the dishonest aspects are obvious.)
This makes for one aspect of building a climate model -- you can't represent everything that you know exists, so you simplify some of it. But you do your best to represent that aspect (turbulence for now) as well as you can -- in its own terms. Turbulence parameterizations try to represent how heat, moisture, and momentum get transported near the earth's surface (among other places), so you check them by how well they do that job.
How well do the models do the job with turbulence? According to Wikipedia there is a closure problem. http://en.wikipedia.org/wiki/Turbulence_modeling#Closure_problem What exactly is that?
I guess Reynolds stress arises from the real-world small turbulence located outside the area whose behavior is being calculated. This would lead to the model turbulence being too stable, due to the non-linear term. My guess is the value should be stated in the definition of the grid box so that, e.g., the hurricane-like vortices present in many models die out neatly in proportion to the fraction of continent in a grid box. The same would apply in the oceans; the vertical mixing of waters has been hard to model.
The poetic version of the explanation is from Lewis F. Richardson, inventor of numerical weather prediction and author of some important papers on turbulence:
“Big whirls have little whirls,
That feed on their velocity;
And little whirls have lesser whirls,
And so on to viscosity.”
The closure problem and Reynolds stresses arise from the same dynamical approach. Namely, you pretend that the fluid is moving with some average velocity. On top of that, you have some whirls (turbulence). The fluid doesn't know that you're thinking this way, nor care. The particles all move around following Newton's laws, which don't distinguish between average and whirl.
If you have just the right kind of flow -- one where there's a clear separation between the mean flow and the whirls -- then Reynolds averaging is pretty accurate. This is almost never truly the case for the atmosphere or ocean, hence the 'closure problem'.
What you do with Reynolds averaging is throw (mathematically) mean+whirl into the dynamical equations and average over some distance, large enough that the average of 'whirl' is zero. Consider stirring coffee. You've got a whirl in the cup, so coffee on one side is moving away from you and coffee on the other side is moving towards you. Averaged over the cup, the coffee has no motion. If you do this, you wind up with two equations of motion from the original: one for the mean flow, and one for the whirls. The mean flow equation looks much the same as the original equation, but has an extra term due to whirls hitting other whirls. These are the Reynolds stresses.
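In standard textbook symbols (my notation, not from the post):

```latex
% Reynolds decomposition: each velocity component is mean plus whirl,
% with the average of the whirl part vanishing over the averaging region.
u_i = \overline{u_i} + u_i', \qquad \overline{u_i'} = 0

% Averaging the momentum equation leaves the mean-flow equation with one
% extra "whirl on whirl" term -- the divergence of the Reynolds stresses
% \overline{u_i' u_j'}, which are the new unknowns:
\frac{\partial \overline{u_i}}{\partial t}
  + \overline{u_j}\,\frac{\partial \overline{u_i}}{\partial x_j}
  = -\frac{1}{\rho}\frac{\partial \overline{p}}{\partial x_i}
  + \nu\nabla^{2}\overline{u_i}
  - \frac{\partial}{\partial x_j}\,\overline{u_i' u_j'}
```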
You either make some assumption ('closure assumption') about the whirl-on-whirl activity, or you continue the process another step -- put smaller whirls on top of the first whirls and add that equation. How many times you expand the equations is the order of your 'closure'. Reynolds did first order (mean + whirls). By the 1970s, there were 2nd and even '2.5th' order closures in use.
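As one standard example of such a closure assumption, the first-order 'eddy viscosity' (K-theory) closure models the whirl-on-whirl term by analogy with molecular viscosity:

```latex
% A common first-order closure: represent the Reynolds stress with an
% eddy viscosity K acting on the mean shear,
-\,\overline{u' w'} = K\,\frac{\partial \overline{u}}{\partial z}
% K is not a property of the fluid; it is one of the tunable numbers
% that gets fit to observations, as described in the post.
```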
imho, much of the progress in the last 20 years on turbulence has been to take new approaches rather than to elaborate on this closure approach. The reason being, closure works best (or at all) only if there is some distance (like the size of my coffee cup in the example above) that you can average over and get rid of all the whirls. But the atmosphere and ocean don't do that.
As for the quality of ocean diffusion: mixed-layer boundaries are difficult -- they always have been. But the quality of models/simulations has improved greatly in the last 20 years.
More generally, turbulence as a purely dynamical process (as opposed to the ocean and atmosphere, where thermodynamics -- temperature and density variations -- is also important) is now sufficiently well understood that quite a lot of work is done via computer simulation rather than wind tunnels.
Thank you sir, for the more exact and quite clear explanation of the Reynolds closure problem. I could imagine a large part of the small whirls would be a result of the topography of the non-fluid medium (solids) in the vicinity of the calculated point (area); clearly defined boundary layers (such as a halocline or the tropopause) are of course another source, and a third source would be gradients of unevenly spread mixtures of fluid materials. Creating an equation taking all of these into account would be fiendish at best, I guess. :-/ But there's no way all of these would be just a matter of adding resolution to the model, I think. It could even be that in some areas one could lose the forward velocity of the fluid altogether and get a whole bunch of new mathematical problems. (Mostly guessing here, though.) Impossible problems are impossible for a reason; who's to say when an electron hits the electron shell of another molecule, and which way both molecules will spin after that.
Thank you for the reply, but I am still a bit confused. What is it that closure addresses? Is it the net velocity or the total energy?
BTW, the sea ice predictions are needed by Friday http://www.arcus.org/sipn/sea-ice-outlook/2015/june/call
I think that, mathematically, calculating the ever-smaller whirls is an iterative process, and 'closure' is just a decision on where to stop the iteration given your computing resources -- could be wrong though.
The closure problem is explained in Turbulence, Closure and Parameterization, which I partly sum up by saying:
To “close the equations” when we have more unknowns than equations means we have to invent a new idea.
Why do we get more unknowns than equations? Because of the averaging we do to get useful statistics. Anyway, it's hard for me to explain in fewer words than the article does.
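To make the counting concrete (a standard textbook summary, not a quote from the linked article):

```latex
% After averaging the incompressible equations we have 4 equations
% (3 mean-momentum + continuity) but 10 unknowns: the mean velocities
% and pressure, plus the 6 independent Reynolds stresses.
\text{unknowns: } \overline{u_i}\ (3),\ \overline{p}\ (1),\
\overline{u_i'u_j'}\ (6) \;\Rightarrow\; 10 > 4 \text{ equations}
% Writing an exact equation for \overline{u_i'u_j'} only introduces
% triple correlations \overline{u_i'u_j'u_k'}, and so on -- the
% hierarchy never closes by itself, hence the closure assumption.
```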