'Analysis' is what we call an attempt to represent the state of the atmosphere/ocean/sea ice/... given a set of observations. One such analysis is the global surface air temperature analysis. That, in turn, spawns efforts to find a global mean temperature, or global mean temperature trends, and so forth. Several of the recently-added blogs aim to study that, in one way or another. That particular analysis is not my interest, for two different reasons.
One is, I'm an oceanographer, so I'm more interested in a sea surface temperature (sst) analysis. The other is, most of the interest in the surface air temperature analysis seems to come from its role as a detector of climate change. On the scale of things, I consider this the second weakest climate change indicator. The only thing weaker, in my view, is the so-called 'Hockey Stick'. But enough raw opinion.
Regardless of what it is you're trying to analyze, and why you're doing so, there are quite a few ways of setting about it objectively. That's why this is the first of something like eight notes I'll be writing up on the idea. There turn out to be many different ways of making an analysis, each objective, each with strengths, each with weaknesses.
The simplest one, though not as simple as you might think, is the 'drop in a bucket' method.
First, a bit of language. We typically divide the earth's surface into a bunch of boxes/cells. Also typically, they're some number (or fraction) of degrees of latitude by some number (or fraction) of degrees of longitude. Depending on which sst analysis I'm looking at, a cell can be anything from 5 degrees on a side to 1/100th of a degree on a side.
The basic idea for drop in a bucket is very simple -- if you have temperature observations in a cell, you use them to find the temperature of (analyze) that cell. If you have no temperatures in a cell, then you have no analysis for that cell. So 0 observations in a cell is very easy -- you report no analysis. 1 observation is also very easy: your analysis temperature is the temperature from that one observation.
But what about having more than 1 observation in a cell? The very simplest thing to do is just average all of them. We just blindly treat all observations as being equally good. Hmm. That sounds a bit problematic. Some observing methods are better than others, after all. The quality of the observations is described by the standard error, which is the standard deviation of the differences between the observed values and the true value -- computed after you have many such observation-to-truth comparisons.
For typical drifting buoys and satellite methods, this is about 0.5 degrees. For ships, let's say 1 degree. This being science, of course I mean degrees C. One could pursue this to substantial complexity, as it's probably the case that every type of buoy has a somewhat different standard error, different ship observing methods have different standard errors, and the different satellites and satellite methods have still others. Life is probably no simpler for surface air temperature observations; and for rain it's even harder.
For the sake of illustration, let's consider a cell with a buoy that observed a temperature of 25 C, and a ship in the same area that observed 26 C. In blind averaging, we'd treat them as equal, and give our analysis as 25.5 C. But ... the buoy is a better observer than the ship. Shouldn't our analysis be closer to the buoy? Maybe we should just throw out the worse observer?
Probably not. The observations have a distribution of likelihood (which is not the vertical axis! beware!) around the value they report. For each observation, the most likely value is what is reported. But those standard errors give us a curve of likelihood. The ship is in orange, the buoy in blue, and a third curve in black:
The ship (orange) has the larger standard error, so it's a very wide curve. It's odd, but true, that the ship observation is more in agreement with a claim of 23 than the buoy observation is, even though the buoy observation is colder. The thing is, the buoy observation is less uncertain.
The third curve is where we get to the creative part. What it is, is that I've multiplied the likelihood curves for the buoy and the ship. The result is a joint likelihood. The peak of the curve is our point of maximum likelihood. It's a temperature of 25.2 C. You can, in principle, do this sort of thing graphically regardless of how many observations you have. It gets tedious and ugly, of course. That's why we invented mathematics. In this case, calculus. (See bottom for the gory math details).
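(If you'd like to try the multiplication yourself rather than take the graph on faith, here's a minimal numerical sketch in Python. The 25 C and 26 C observations and the 0.5 and 1 degree standard errors are the ones from the example above; the function name and everything else are just illustration, not anybody's operational code.)

```python
import numpy as np

def likelihood(T, T_obs, s):
    """Gaussian likelihood curve for an observation T_obs with standard error s."""
    return np.exp(-(T - T_obs)**2 / (2.0 * s**2))

T = np.linspace(20.0, 30.0, 100001)   # candidate 'true' temperatures, 0.0001 C apart

buoy = likelihood(T, 25.0, 0.5)       # buoy: 25 C, standard error 0.5 C
ship = likelihood(T, 26.0, 1.0)       # ship: 26 C, standard error 1.0 C
joint = buoy * ship                   # multiply the curves -> joint likelihood

print("Joint likelihood peaks at %.2f C" % T[np.argmax(joint)])   # about 25.2 C
```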
We also see that our resulting estimate, the maximum likelihood curve, is narrower than the two original ones. The more observations we have, the better our resulting estimate -- even better than the original observations. This is the same sort of thing we saw at work in How can annual average temperatures be so precise?
Our estimate based on considering the quality of the different observing platforms is 25.2, rather than the 25.5 we get from treating them as equally good. When we're trying to detect climate change by way of small differences over time, it's obviously important to pay attention to this sort of difference. If you change from one method to the other, there's a 0.3 C change -- not because of climate, but because you changed your method for filling cells. (Not a mistake I think anyone has made, but a heads up if you are getting started.)
The general term for this sort of thing (treating some observations as better than others) is 'weighting'. We give more weight to some sources than others. I'll give the exact method at bottom for the mathy folks. Different methods that we'll be getting to will do their weighting in different ways.
Now let's go back to thinking about the general approach. We select boxes of some size, and then if there's no observation in a box, we say that we know nothing about what's going on in that box. There are about 5000 drifting buoy observations per day. If the cells are 5 degrees on a side, and the buoys are distributed randomly, there are about 3 observations per ocean grid cell and we are doing pretty well. On the other hand, if our cells are 1/100th of a degree on a side, then probably only one cell in 80,000 has an observation. How did we go from knowing most of the globe pretty well (about 3 observations per cell on average) to knowing almost nothing? On the other hand, for every cell you say you know something about, you definitely have at least one observation in support, and you haven't made any assumptions (at least not past selecting the cell size).
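(A back-of-envelope check on those numbers, again in Python. The 5000 observations per day is the figure quoted above; the 70% ocean fraction and the flat lat-lon cell count are my own rough assumptions, so the exact answers will wobble, but the contrast between the two cell sizes is the point.)

```python
n_obs = 5000        # roughly 5000 drifting buoy observations per day (figure from the text)
ocean_frac = 0.7    # buoys are ocean-only; oceans cover ~70% of the surface (round number)

def ocean_cells(deg_per_side):
    """Crude count of ocean lat-lon cells of a given size, ignoring meridian convergence."""
    return (360.0 / deg_per_side) * (180.0 / deg_per_side) * ocean_frac

for size in (5.0, 0.01):
    n = ocean_cells(size)
    print(f"{size:5.2f} degree cells: about {n:,.0f} cells, {n_obs / n:.2g} obs per cell")

# 5 degree cells come out near 3 observations per cell; 1/100 degree cells come out
# near one observation per 90,000 cells -- the same ballpark as the 1-in-80,000 above.
```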
But suppose you are running a numerical weather prediction (NWP) model. You can't accept an 'I don't know' for starting your prediction. Consequently, the people involved in NWP were among the first to develop more advanced methods. That's next.
Methods (so far):
1a) Drop in bucket, blind averaging
1b) Drop in bucket, maximum likelihood averaging
Gory math details:
To find the maximum likelihood estimate for temperature given N observations, multiply together your N curves, each of the form exp( -(T - T_i)^2 / (2 s_i^2) ), and find the T for which the product is a maximum. T_i is the ith observation, s_i is its standard error.
The maximum likelihood value is sum(T_i / s_i^2) / sum(1 / s_i^2).
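(For those who'd rather see that as code than as a formula, here's a minimal sketch in Python using the buoy-and-ship numbers from the example above. It's just inverse-variance weighting, and the function name is my own.)

```python
def max_likelihood_temperature(obs, std_errs):
    """Inverse-variance weighted average: sum(T_i / s_i^2) / sum(1 / s_i^2)."""
    weights = [1.0 / s**2 for s in std_errs]
    return sum(w * T for w, T in zip(weights, obs)) / sum(weights)

# Buoy: 25 C with 0.5 C standard error; ship: 26 C with 1.0 C standard error.
print(max_likelihood_temperature([25.0, 26.0], [0.5, 1.0]))   # -> 25.2
print(sum([25.0, 26.0]) / 2)                                   # blind average -> 25.5
```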
I'll leave it for those interested to compute the standard error of the maximum likelihood estimate itself.
01 September 2010
6 comments:
I realize it's a bit off-topic to the direction of your post, but could you elaborate on what you mean by the hockey stick being the weakest climate change indicator? E.g., does your comment imply that timings of plant blooms and animal movements are stronger indicators of climate change than surface temperatures in the hockey stick and sea surface temperatures?
thanks,
jg
jg:
You have the right of it. I think the best indicators of climate change are things directly, or at least relatively directly, responsive to climate and its changes. Glacier mass balance is directly a sign of climate. It's a sign of more than one part of climate -- both temperature and precipitation -- but climate. Animal ranges, flower bloom times, borehole temperatures, and so forth are other things directly representing something about climate. (Plus sea ice extent, not that I'm biased or anything. :-)
Trying to analyze proxies towards getting a temperature ... well, that's indirect. Tree growth itself, for instance, is a more or less direct climate indicator -- but affected by more than one climate component. The problem, in my view, is when you try to extract a clean temperature-only signal. That's a couple of steps removed from what you're directly observing, so is a weaker data source.
The surface temperature record has its problems with siting, continuity, contamination, changes of instruments, changes of siting, etc..
The people working on either sort of data know of these issues, of course. So the resulting data sets are reasonably good. But there's something of a silk purse and sow's ear problem inherent*. Still, if push comes to shove, I prefer those other sorts of data for deciding that there's climate change, and for judging the nature and degree of that change.
*For non-native speakers: There's an English saying "You can't make a silk purse out of a sow's ear." The point being that if your starting product is poor, the final product can only be so good. A lot of science turns on discovering that your starting ingredient is actually a lot better than other people realized.
My other point is that no single source is conclusive. It is the fact that the overwhelming majority of indicators point in the same direction that drives my conclusion. Any one of them could be undone by data source errors/problems/messiness/details. But, since they all have different observational problems, the chances of so very many of them all pointing in one direction due to data problems are quite low.
Tip: You should consider using an online LaTeX equation editor: Example
One is, if you are an oceanographer, or a retired one: "The surface temperature record has its problems with siting, continuity, contamination, changes of instruments, changes of siting, etc." -- those are your words. Think of the ocean as a natural "freezer", a cold storage area, where the cold currents are more significant than sea surface temperatures. Changes of this kind in sub-photic water temperatures can affect the cryosphere, in particular the small and large shields of ice precariously attached to the ice shelves. The last "ice cubes" that calved created ice islands of about 4 to 5 km in the Ilulissat area, with ice thicknesses above normal due to the influence of melt water below. Glacier tongues that haven't been measured by radio-echo sounding are the real issue, along with turnover of nutrient-rich cold water happening at abnormally high and low latitudes, and algal blooms. For instance, Milne Shelf measurements that assume an average ice thickness, with no real radio-echo sounding measurements, give false ice volumes. You see the point.
It's humorous that the animals and plants are better at sensing subtle changes in climate than we are with all our brainpower and instrumentation.
Of course, they know nothing about "global mean temperature anomaly"
In fact, many of them sense changes from essentially one spot on earth (and without internet!)
rab: thanks for the word on the equation rendering. I use LaTeX for my papers, but hadn't found an easy solution for putting equations into Blogger.
Horatio:
I'd put it a little differently. With our brainpower, we can often find a way to fool ourselves. Doubly so if there's a political or religious reason to do so. Squirrels don't care about your politics, nor do acorns. So what they have to say is more obviously free of those biases. Of course it's still someone with our brainpower going out to see what the squirrels and acorns are doing.