Monday, September 29, 2008

24 Hour Contests

My wife, Vickie, and I took part in 24 hour contests last weekend. Hers was the more formal -- a 24 hour short-story writing contest. They give a topic and the word length at noon one day, and the story is due by noon the next day. Neither really suited her; the length was only 850 words, which is quite short for a good story. But I believe she succeeded. And she discovered some more about her writing, and what she can do. So all to the good. We'll find out in a month or so what the judges thought.

My own contest was a bit more unusual, not least because there were no other contestants, no judge, and no rules. Still, it occurred to us that while Vickie was doing her 24 hours focused on writing, I could also do 24 hours focused on writing. But in my case, a science paper. Continuing with a paper I've already started and trying to finish it in 24 hours would not have been in keeping with the spirit of her contest, which was focused on novelty. I have a ton of ideas, though, lurking in the back of my mind at any given time, and several feet of them in cabinets if, for some reason, I don't like ones that are leaping to mind. So my challenge was to take one of them and give it a good hard run for 24 hours.

I didn't finish a paper, though I did get 2 good pages written. The writing was done in the first hour. (I'm a fast typist and have been thinking about this idea off and on for a few years.) Then it was on to the charge at the data. Or, rather, the slow and careful sneaking up on the data, hoping it didn't bare its fangs and shred my idea in the first few seconds of contact.

After my 24 hours, the notion was still intact and, if anything, looking better. I didn't finish the paper, but no surprise there, as there are actually quite a few papers to come from this idea. But I did make good progress on getting data and testing that the idea held up against some reasonably good counter-tests. More detail to come later, once I get a little farther. But topics coming up here before then include the North Atlantic Oscillation, Arctic Oscillation, Pacific-North American pattern, and Antarctic Oscillation (NAO, AO, PNA, and AAO, respectively).

Wednesday, September 24, 2008

Atmospheric Lapse Rates

The question place is serving its purposes, one of which is to bring up points that warrant some discussion in a fuller post. At hand is the atmospheric lapse rate, which Bart brought up by way of his question:
2) I read elsewhere (I can only research what I read, I don't really have the ability to check much of this for myself) that models assume a constant lapse rate. Chris said the lapse rate is required for the greenhouse effect, but from everything I look at people only categorize in "Dry" or "Moist" cases, but doesn't it vary everywhere over the globe?


There are models, somewhere, that assume anything one could mention, so I suppose there are some which assume the lapse rate. As you correctly notice, though, lapse rates depend on conditions, and those conditions vary over the globe. A serious climate model couldn't assume the lapse rate. And, in truth, they don't. More in a moment, but something to look back at is my description of the 16 climate models.

Let's start with the lapse rate itself. What it is, is the change in temperature with elevation. Through the troposphere, the lapse rate is a negative number (cooling with elevation). In the stratosphere, it turns to zero and then positive (warming with elevation). In the mesosphere, we go back to cooling with elevation. This is a strictly observational issue. You can find temperature profiles, say from the Standard Atmosphere (a specific thing; Project: take a web look for it and see what they look like; they've changed through time, by the way). Then find the temperature difference between two levels, and divide by the elevation difference. That'll give you the average lapse rate. You can also find radiosonde soundings of temperature. (I'd start my search for this project at the National Climatic Data Center.) These will let you see how lapse rates vary day to day at a location, and between locations.
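If you want to try the arithmetic yourself, here's a little Python sketch. The two levels are the U.S. Standard Atmosphere's sea level and 11 km (tropopause) values; look up the full tables for the real project.

```python
# Average lapse rate between two levels of a temperature profile.
# The two temperatures below are U.S. Standard Atmosphere values
# (sea level 288.15 K, 11 km 216.65 K).

def average_lapse_rate(t_lower_k, t_upper_k, z_lower_km, z_upper_km):
    """Change in temperature with elevation, in K per km.
    Negative means cooling with height."""
    return (t_upper_k - t_lower_k) / (z_upper_km - z_lower_km)

rate = average_lapse_rate(288.15, 216.65, 0.0, 11.0)
print(round(rate, 2))  # -6.5 K per km through the standard troposphere
```

Doing the same thing between levels of a real radiosonde sounding will show you how much day-to-day variation there is around that average.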

On the theoretical side, we go back to Conservation of Energy. We start with a completely dry (meaning no water vapor) blob of air, in an insulating bag that prevents it from radiating, conducting, or convecting energy to or from the surroundings. Then we lift it through the atmosphere. As we do so we'll find that its temperature drops. This happens because our blob does work in expanding. The energy for that work comes from its own thermal energy store. We can compute exactly how much the air would cool under this circumstance. It is about 10 K per km near the surface of the earth. This is what we are referring to in talking about the Dry Adiabatic Lapse Rate. The 'adiabatic' refers to our insulating bag around the air blob.
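Conservation of energy gives a tidy formula for that "about 10 K per km": the dry adiabatic lapse rate is gravity divided by the specific heat of air at constant pressure. A quick check in Python, with the constants rounded:

```python
# Dry adiabatic lapse rate from conservation of energy: g / c_p.
g = 9.81      # gravitational acceleration, m/s^2
cp = 1004.0   # specific heat of dry air at constant pressure, J/(kg K)

gamma_dry = g / cp                 # K per meter of lift
print(round(gamma_dry * 1000, 2))  # ~9.77 K per km: the "about 10" above
```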

The polar regions, particularly the Antarctic plateau, are not bad approximations to that situation. But most of the atmosphere has fairly significant amounts of water vapor. We start, now, with a slightly different bag. It still prevents heat from being added to or lost from the bag from outside. But now there's a second energy source inside the bag. Water vapor can condense, and when it does, it releases energy. We take the approximation that all the heat energy goes to the gases in the bag, and that the newly-formed liquid water is immediately moved outside the bag.

Now when we lift the bag, things go a bit differently. Let's start with air at 70% relative humidity, a typical global mean value. As we lift the air, it first acts 'dry', so cools at about the 10 K per km rate. But after a while, we have cooled to the point of being at 100% relative humidity. If we lift any further, water starts condensing and releasing heat. The condensation only happens if we're still cooling, so it can't reverse that tendency. But it can greatly slow the rate of cooling. This gives us a Moist Lapse Rate. Note that I dropped 'adiabatic' from the description. Since material is leaving the bag, it isn't an adiabatic process any more. It is pseudoadiabatic (a term you'll see) -- almost adiabatic, as the loss of mass isn't large. But not entirely adiabatic.

As a typical ballpark value, we take 6.5 K per km as the moist lapse rate. But this obviously will depend a lot on how much water was in the bag to begin with, and on the temperature. If we start with a very warm, saturated bag of air, then the lapse rate can be even lower than the 6.5 K per km. If we start, though, with a cold blob of air, even if it is saturated, we are still close to the 10 K per km lapse rate. The thing is, as we get colder there's less water vapor present, which means less condensation and so less heating. Consequently even in the tropics, the lapse rate heads towards the dry adiabatic value as you get high above the surface.
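For the curious, here's a Python sketch of a common textbook form of the saturated lapse rate, using the approximate Tetens formula for saturation vapor pressure. The constants are rounded and the 850 mb pressure level is just my choice for illustration; it's a sketch of the temperature dependence, not a precise calculation.

```python
import math

# Saturated (pseudoadiabatic) lapse rate as a function of temperature,
# in a standard textbook approximation. Constants rounded.
g, cp = 9.81, 1004.0   # m/s^2, J/(kg K)
Lv = 2.5e6             # latent heat of vaporization, J/kg
Rd, Rv = 287.0, 461.5  # gas constants for dry air and vapor, J/(kg K)

def sat_mixing_ratio(t_k, p_pa):
    """Saturation mixing ratio via the (approximate) Tetens formula."""
    t_c = t_k - 273.15
    es = 610.78 * math.exp(17.27 * t_c / (t_c + 237.3))  # Pa
    return 0.622 * es / (p_pa - es)

def moist_lapse_rate(t_k, p_pa=85000.0):
    """Saturated lapse rate in K per km (illustrative pressure level)."""
    qs = sat_mixing_ratio(t_k, p_pa)
    num = 1.0 + Lv * qs / (Rd * t_k)
    den = 1.0 + Lv**2 * qs / (cp * Rv * t_k**2)
    return (g / cp) * (num / den) * 1000.0

print(round(moist_lapse_rate(298.15), 1))  # warm, saturated: well below 10
print(round(moist_lapse_rate(243.15), 1))  # cold: approaching the dry ~10
```

Warm saturated air cools at only a few K per km, while cold air, saturated or not, is nearly back at the dry value -- exactly the behavior described above.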

Whether moist or dry, the lapse rate computed this way is an idealization. In the real atmosphere, radiation does move energy around, and blobs of air do mix with each other (even when rising). Still, it's derived from a strong scientific principle (conservation of energy), and it turns out to give us good ideas (in reasonable accord with observation) about what the atmosphere should look like in the vertical.


For the modelling, let's think back to the 16 models. First, many of them are never used, so we'll ignore the models built primarily on longitude. That leaves us with the 0 dimensional model I've already given an example of, and there's not even the opportunity to impose or make use of a lapse rate there. The 4 dimensional model definitely doesn't assume a lapse rate -- doing so would force violations of conservation of energy. Radiative-convective models can't force the lapse rate for the same reason. For a discussion of such models, to which I'll be returning in another post about water vapor's greenhouse contribution, see Ramanathan and Coakley, 1978. As of that era, one did specify a critical lapse rate. This wasn't the lapse rate the model had to have; rather, it was a limit. If the limit was violated, something had to happen. That something was to conserve energy by mixing the layers that violated the limit. And Energy Balance Models, as I expected, don't even mention the lapse rate. See North, 1975 for a discussion of energy balance models.

Either the models are too simple to know about lapse rates (0 dimensional, Energy Balance), or they compute the lapse rate (Radiative-Convective, 4 dimensional). Either way, the lapse rate is not assumed beforehand. It's an interesting after-the-fact diagnostic for the Radiative-Convective or 4d models, and impossible to speak to for the others.
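To make the 'limit, not assumption' point concrete, here's a toy convective adjustment in Python. It's only a sketch of the idea (equally spaced, equal-mass levels, energy-conserving mixing of any pair that exceeds the critical lapse rate), not how any particular model codes it.

```python
# Toy convective adjustment: the critical lapse rate is a limit, not an
# assumption. Wherever a layer cools with height faster than the limit,
# mix the two levels, conserving their (equal-mass) mean energy.
CRITICAL = 6.5  # K per km, a typical choice of limit

def adjust(temps_k, dz_km=1.0):
    """Adjust equally spaced, equal-mass levels until no pair
    exceeds the critical lapse rate."""
    t = list(temps_k)
    changed = True
    while changed:
        changed = False
        for i in range(len(t) - 1):
            lapse = (t[i] - t[i + 1]) / dz_km  # cooling with height, K/km
            if lapse > CRITICAL + 1e-9:
                mean = (t[i] + t[i + 1]) / 2.0  # conserve energy...
                half = CRITICAL * dz_km / 2.0   # ...and relax to the limit
                t[i], t[i + 1] = mean + half, mean - half
                changed = True
    return t

profile = [288.0, 278.0, 272.0]  # 10 K/km in the lowest layer: too steep
print([round(x, 2) for x in adjust(profile)])
```

The adjusted profile has the same total energy as the original, but no layer steeper than the limit -- the lapse rate comes out of the energy bookkeeping, it doesn't go in.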


One thing to do is find some better sources for you to read. I taught an introductory (freshman level) physical geology class with Lutgens and Tarbuck, and liked the text there. They have a text at that level for meteorology, but I haven't read it myself. It should be good, though. John M. Wallace and Peter V. Hobbs, Atmospheric Science: An Introductory Survey is an excellent book. In half the chapters, comfort with multivariate calculus is assumed. But the other half are descriptive/physical rather than quantitative/mathematical so should be approachable already. A second edition is now out, I used the first. Does anyone have suggestions for a good freshman level introduction to meteorology/climate?

Sunday, September 21, 2008

Excess Precision

Excessive precision is one of the first methods mentioned in How to Lie With Statistics. It's one that my wife (a nonscientist) had discovered herself. It's very common, which makes it a handy warning signal when reading suspect sources.

In joke form, it goes like this:
Psychology students were training rats to run mazes. In the final report, they noted "33.3333% of the rats learned to run the maze. 33.3333% of the rats failed to learn. And the third rat escaped."

If you didn't at least wince, here's why you should have. In reporting scientific numbers, one of the things you need to do is represent how good the numbers are. In order to talk about 33.3333% of the rats, you'd have to have a population of a million rats or more. 33.3333% is saying that the figure is not 33.3334% or 33.3332%. You should only be showing as much precision as you have data for. Even though your calculator will happily give you 6-12 digits, you should be representing how accurate your number is. In the case of the rat problem, if 1 more rat had been run, one of those 33% figures would change to 25 or 50. Changes of +17% or -8% are so large that the results should not even have been reported at the 1% level of precision. What the students should have done was just list the numbers of rats all along, rather than percentages.

As a reader, a useful test is to look for how large the population is versus how many digits they report in percentages. Every digit in the percentage requires 10 times as large a population. Need 10 for the first digit (again, the psych. students shouldn't have reported percents), 100 for the second, and so on. A related question is 'how much would the percentages change with one more success/failure?' This is what I looked at with running the extra rat.
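If you like, both tests can be written down in a few lines of Python. The 1500-member population is made up for illustration:

```python
# How many digits of percentage precision does a sample of size n justify?
# Rule of thumb from the text: each digit needs 10 times the population.

def justified_digits(n):
    """Digits of percentage precision a population of n can support."""
    digits = 0
    while n >= 10:
        digits += 1
        n //= 10
    return digits

print(justified_digits(3))     # 0 -- the psych students: report counts
print(justified_digits(1500))  # 3 -- e.g. 33.3% would be defensible

# The 'extra rat' test: how far one more success moves the percentage.
def swing(successes, n):
    before = 100.0 * successes / n
    after = 100.0 * (successes + 1) / (n + 1)
    return after - before

print(round(swing(1, 3), 1))   # 1/3 -> 2/4: a swing of +16.7 points
```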

Related is to consider how precise the numbers involved were at the start. When I looked at that bogus petition, for instance, I reported 0.3 and 0.8%. Now the number of signers was given in 4 or 5 digits. That would permit quite a few more than the 1 I reported. The reason for only 1 is that I was dividing the number of signers by the size of the populations (2,000,000 and 800,000) -- and the population numbers looked like they'd been rounded heavily, down to only 1 digit of precision. When working with numbers of different precisions, the final answer can only have as many digits precision as the worst number in the entire chain.
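A quick Python illustration of that weakest-link rule; the signer count here is a made-up 5-digit number, and the population is rounded to 1 significant digit as in the petition case:

```python
# The least precise input limits the answer: a population rounded to
# 1 significant digit means the quotient deserves only 1 digit too.
signers = 31478          # hypothetical 5-digit count
population = 2_000_000   # rounded to 1 significant digit

fraction = 100.0 * signers / population
print(round(fraction, 4))  # the calculator happily says 1.5739...
print(f"{fraction:.1g}")   # ...but only 1 digit is honest: "2" (percent)
```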


An example, and maybe the single most commonly repeated one from climate, is this page, which gives (variously, but table 3 is the pièce de résistance) the fraction of the greenhouse effect due to water vapor as 95.000%. That's a lot of digits!

Let's take a look at the sources he gives, and then think a little about the situation to see whether 5 digits of precision is reasonable. Well, the sources he has valid links for (1 of the 9 is broken, and one source doesn't have a link; I'll follow that up at lunch at work in a bit) certainly don't show much precision. Or much of being scientific, for that matter (news opinion pieces and the like). My favorite is 21st Century Science and Technology (a LaRouche publication), whose cover articles include "LaRouche on the Pagan Worship of Newton". The figures given are 96-99% (the LaRouche magazine), 'over 90%', 'about 95%', and the like. Not a single one gives a high-precision 95.000%, or a high precision for any other figure. This should have been a red flag to the author, and certainly is to us readers. Whatever can be said about the fraction of the greenhouse effect due to water vapor, it obviously can't be said with much precision. Not if you're being honest about it. (We'll come back in a later post to what can be said about water vapor, and it turns out that even the lowest of the figures is too high if you look at the science.)

Now for a bit of thinking on water vapor. The colder the atmosphere is, the less water vapor there can be before it starts to condense. (It's wrong to call it the atmosphere 'holding' the water vapor, but more in another post.) So the amount varies quite a lot depending on temperature. In wintertime here (0 C, 32 F being a typical temperature), the pressure of water vapor varies from about, say, 2 to 6 mb. In summer, it's more like 10 to 30. (30 mb?! It gets very soggy here, though not as much as Tampa.) On a day that it's 30 mb here, it can be 10 mb a couple hundred km/miles to the west. Water vapor varies strongly through both time and space. As a plausibility test, then, it makes no sense for there to be 5 digits of precision in the contribution of something that varies by over a factor of 10 in the course of a year, and even more than that from place to place on the planet.

Thursday, September 18, 2008

1970s Mythology

One of the more popular myths repeated by those who don't want to deal with the science on climate is that 'in the 70s they were calling for an imminent ice age' and such like nonsense, where 'they' is supposedly the scientists in climate. This has long been known to be false to anyone who paid attention to the scientific publications from the time, or even to William Connolley's efforts in documenting what was actually in the literature over the last several years. Now, William and two other authors (he's actually the second author on the paper) have put that documentation into high profile peer-reviewed literature -- the Bulletin of the American Meteorological Society. For the briefer version, see William's comments over at Stoat and web links therein. That page also includes a link to the full paper in .pdf format.

Tuesday, September 16, 2008

Sea Ice Packs

I've already mentioned types of sea ice, but that's only a bare scratch on the surface of the subject of sea ice. Another bit of vocabulary before diving in to today's sea ice: a chunk of sea ice is called a 'floe'. Not a flow, nor a sheet, a floe. Ice sheet is something quite different.

When we get a bunch of floes together, we start to have an ice pack. Three terms come up for describing a region of the ice pack (or maybe the entirety): concentration, area, and extent. Ice pack area makes the most intuitive sense -- add up the area of all ice floes, and that's the area of the ice pack. Concentration and extent are a little more removed. For concentration, draw a curve around some region you're interested in. Then divide the area of sea ice by the total area of the region bounded by your curve. Two common 'curves' used in the science are the footprint of a satellite sensor, and the area of a grid cell. The latter is what you'll see presented on any of the graphics at the sea ice sites I link to. For extent, you then take your grid and for every cell that has more than some concentration (which you'll specify), you add up the area of the entire grid cell. Extent will always be greater than area.
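A toy example in Python may help fix the three terms. The grid cells and ice amounts are invented numbers, just to show the bookkeeping:

```python
# Concentration, area, and extent from a toy grid of four cells.
# Each entry: (cell_area_km2, ice_area_km2 within that cell). Made up.
cells = [(625.0, 500.0), (625.0, 250.0), (625.0, 50.0), (625.0, 0.0)]

CUTOFF = 0.15  # the usual 15% concentration cutoff

ice_area = sum(ice for _, ice in cells)  # add up the ice itself
extent = sum(cell for cell, ice in cells
             if ice / cell > CUTOFF)     # whole cell counts if above cutoff

print(ice_area)  # 800.0 km^2 of actual ice
print(extent)    # 1250.0 km^2 -- only the first two cells pass 15%
```

Note the third cell: 50 of 625 km² is 8% concentration, so it contributes to area but not to extent -- and extent still comes out larger than area, as it always must.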

The usual concentration cutoff, and the one to assume if it isn't specified, is 15%. Below this, the ice is not reliably detected by the most commonly-used sensors, and it is a much smaller practical problem for ships. Not that ships appreciate bashing into ice floes, but at this concentration or lower, it can be manageable to move around them (and get out of the ice pack you were surprised by!).

The most common type of sensor to use for detecting sea ice from space uses passive microwaves. The ice (it turns out) emits microwave energy much more effectively than the ocean around it. This gives it a higher brightness temperature. Between that and some other details, we can get back an estimate of the concentration of sea ice that the satellite was looking at. A word, though, as we're coming out of summer: the method relies on the difference between ice and water. If you have ponds of water sitting on the ice floes, which can happen on thick ice such as the Arctic can have, then your concentration (area) estimate will be biased low. The extent is probably still not too bad. The reason is, by the time you're falling below 15% cover, the thick floes will have been storm-tossed enough that the ponds will have been emptied, or that it's late enough in the season that the melt pond melted its way through the ice floe and there really isn't any ice under the apparent water any more.

In looking at the NSIDC and Cryosphere Today pages on the Arctic melt, one thing to keep in mind is that one uses extent and the other uses area. Their numbers aren't directly comparable. They also differ in how they compute their estimates, in that one uses a longer averaging period than the other. The longer period gives you more confidence about the value (weather over the ice, or ocean, can give you false readings, but it moves pretty fast compared to the ice cover), but will miss some of the details in time.

More to come ... (bwahaha) But, in the mean time, questions you have about sea ice are welcome here.

Sunday, September 14, 2008

The 16 Climate Models

The number of climate models, in the sense I'm using, has nothing to do with how many different groups are working on modelling climate. I'm sure the latter figure is much larger than 16. Instead, it is an expansion on my simplest climate model, and can give a sense of what lies down the road for our exploration of climate modelling.

The simplest climate model is the 0 dimensional model. We average over all of latitude, longitude, elevation, and time (or at least enough time). Those are the 4 dimensions we could have studied, or could get our answer in terms of. The 0 dimensional model gives us just a number -- a single temperature to describe everything in the climate system. We could expand, perhaps, to also getting a single wind, humidity, and a few other things. But it's distinctly lacking in terms of telling us everything we'd like to know. It fails to tell us why the surface averages 288 K, instead of the 255 K we see as the blackbody temperature. But it does give us the blackbody temperature as a start.
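For anyone who wants to check that 255 K, here's the 0 dimensional calculation in a few lines of Python -- absorbed sunlight balanced against blackbody emission:

```python
# The 0-dimensional model: one number for the whole climate system.
# Absorbed sunlight = emitted blackbody radiation, solved for T.
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
S = 1366.0       # solar constant, W/m^2
albedo = 0.30    # fraction of sunlight reflected

absorbed = S * (1.0 - albedo) / 4.0  # the 1/4 spreads a disk over a sphere
T = (absorbed / sigma) ** 0.25
print(round(T))  # ~255 K, versus the observed surface average of 288 K
```

The 33 K gap between that answer and the observed 288 K is exactly what this model can't explain, and why we move down the list to more complex ones.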

There is also only one 4 dimensional model -- where you include all 4 dimensions: latitude, longitude, elevation, and time. These are the full climate models, also called general circulation models (GCMs), atmosphere-ocean general circulation models (AOGCMs -- the original GCMs only let the atmosphere circulate), and a few other things. These are the most complex of the models.

But there are 14 more climate models possible: 4 one dimensional, 6 two dimensional, and 4 three dimensional.

In one dimension, we have the four which let only 1 dimension vary:
  • time
  • elevation
  • latitude
  • longitude

Something quite close to the simplest model can be used for the time-only climate model. We would then let the earth-sun distance vary through the year, solar constant vary with the solar cycle, and albedo ... well, that would be a bit of a problem. As we've still averaged over all latitudes and longitudes, however, this model wouldn't tell us why high latitudes are colder than low latitudes, or why land on the eastern side of oceans is warmer than land on the western side, or ... a lot. Still, it would take us another step of complexity down the road to understanding the climate system on a global scale. This sort of model isn't used much professionally, but it can be a help.

In elevation only, we'd (we hope) be able to look into why the temperatures in the atmosphere do what they do -- falling as you rise through the troposphere and mesosphere, even or rising in the stratosphere. This class of models is known as the Radiative-Convective models (RCM). Namely, they include radiation and convection. The most famous early model of this sort is by Manabe and Wetherald (1967). We'll be coming back here.

In latitude only, we'll start being able to see why the poles are colder than the equator. Budyko and Sellers, separately but both in 1969, developed models like this. They're called energy balance models (EBM). They start with our simplest climate model, but applied to latitude belts on the earth. First you pretend that no energy enters or leaves the latitude belt except through the top of the atmosphere. Same thing as we said for the simplest model, except we applied it to the whole earth. You then compute the latitude belt's temperature, and discover that the tropics would be much warmer than they are, and the polar regions would be much colder. We're not surprised that we get the wrong answer here, but the degree of error then tells us by how much and where this 'no latitudinal energy transport' approximation is worst. You can then add the physics of 'heat flows from hot to cold' and get to work on how the climate in your model changes due to this fact.
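Here's a Python sketch of that first, no-transport step. The insolation and albedo values per belt are rough illustrative numbers of my own, not anyone's data -- the point is only the shape of the answer:

```python
# Energy balance by latitude belt, with 'no transport' as a first guess.
# Per-belt insolation and albedo are rough illustrative numbers.
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def belt_temperature(mean_insolation, albedo):
    """Blackbody temperature if no energy crosses the belt's edges."""
    absorbed = mean_insolation * (1.0 - albedo)
    return (absorbed / sigma) ** 0.25

tropics = belt_temperature(420.0, 0.25)  # strong sun, darker surface
polar = belt_temperature(180.0, 0.60)    # weak sun, bright ice

print(round(tropics), round(polar))
print(round(tropics - polar))  # equator-to-pole spread of 80+ K
```

That spread is far larger than the real atmosphere shows: the tropics come out too warm and the poles far too cold relative to each other, which is the error that tells you heat must flow poleward, and roughly how much.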

The 4th one dimensional model, I've never seen anyone use -- a model in longitude only. This dimension is much quieter than the other two spatial dimensions. In the vertical, global average temperatures vary by something like 100 C in something like 10 km. 10 C/km; we'll get to exactly how much, where, and why, later. In latitude, temperatures vary from 30-40 C in low latitudes to -40 to -80 C in high latitudes (poles), so rounding again, about 100 C, but now across 10,000 km. About 0.01 C/km. In longitude, after we average over all year and all latitudes, ... there isn't much variation. As an eyeball matter, I'd be surprised if it were more than 10 C. (Project: Compute it. Let me know your result and sources. I may eventually do it myself.) This would be not more than 10 C, but still across 10,000 km or so, so something like 0.001 C/km at most (average absolute magnitude).

So our 4 models can be sequenced in terms of how much variation they get involved with, and, not coincidentally, it's something like the order of frequency I've seen the models in the literature:
  • Elevation -- Radiative-Convective Models (RCM) -- 10 C/km, 100+ C range
  • Latitude -- Energy Balance Models (EBM) -- 0.01 C/km, about 100 C range
  • Time -- (not common enough to have a name I know of) -- a few C range, seasonally
  • Longitude -- (never used that I know of) -- 0.001 C/km or less, a few C range

The 6 two-dimensional models are:
  • time-elevation (an expanded Radiative-Convective Model)
  • time-latitude (an expanded Energy Balance Model)
  • time-longitude (I've never seen done as a model, but Hovmöller diagrams do this in data analysis)
and then to ignore time, and take
  • elevation-latitude (a cross between Radiative-Convective and Energy Balance)
  • elevation-longitude (I've never seen as a model, but it's not unheard of for data analysis)
  • latitude-longitude (I've never seen as a model, but common for data analysis)
The three that don't involve longitude are (or at least were) relatively common for models.

In 3 dimensional modelling, we are back down to 4 models, as for 1 dimensional. This time, though, it's a matter of what we leave out:
  • time (keep latitude, longitude, elevation; not common for models)
  • longitude (keep time, latitude, elevation -> the straight combination of RCM and EBM; most common of the 3D models)
  • latitude (keep time, longitude, elevation)
  • elevation (keep time, latitude, longitude)

And then we have kitchen sink, er, 4 dimensional, modelling.

A question I'll take up later is why we would run a simpler model (1d instead of 2d, 3d instead of 4d) if we could run the more complex model. Part of the answer will be that there's more than one way to be complex.

Friday, September 12, 2008

Shared Knowledge and Sources

Since I provided a demonstration myself of source fallibility in my last post (gas compression technology), time for a bit more on sources, knowledge, and whom to trust how much about what. On the least interesting level, it was just a matter of an unsupported statement by someone speaking out of his field of expertise. That's the least reliable sort of statement you can have. The more important or interesting the topic is to you, the less confidence you want to put in the statement. That's true whether it's me talking about old technologies (you have no reason as yet to think that I know much about that area), or engineers regarding climate change, or ....

As I've encouraged, the first comment/correction included a source. A more or less reasonable source (vs. quoting some other blog by some other nonspecialist in the topic) beats an unsourced comment. The next step would be for me to provide the even better source I'd read for my original comment.

Here we get a little more interesting for doing science. I don't remember that source. For me, the source I read beats a Wikipedia article. I read the source and could see how good it was, by an author of what expertise, etc. But where does that leave you? Some might be tempted to figure that I'm a good guy (of course I am! :-) so I must be more trustworthy than a Wikipedia article. As a social matter, that works. But the science-minded here know that what you're left with, for thinking about science, is that Wikipedia article. Science is not only about knowledge, but about shared and sharable knowledge.

If someone can't share the source or support for their knowledge, it isn't science for you. They might be right. But without that sharability, it isn't science. The more interesting, surprising, or important the point is, the more important it is that you be able to follow up someone's first comment with a source.

Even when I'm within my professional areas, therefore, requests for sources where you can learn more are welcome. I won't always be providing them in the original note as it can make for awfully slow reading for you. Still, when you're surprised by something (say, because it contradicts something you thought was true), do ask for a source. In my serious areas, I can provide you with them, and usually several. Of course I'll ask you to be even-handed about it -- while questioning where my source is, please go back to where you learned the point I'm disagreeing with and check out the sources on that, too.

Old comment: "It isn't what you don't know that's the problem. It's what you know that isn't so."

Tuesday, September 9, 2008

Elegant Gas Compression Technology

Both a matter of some fundamental science and a particularly elegant technology. The technology is the Australian Aboriginal firestarter. What makes it elegant is that it is simple, easy to use, makes a good demonstration of a physical principle, and is extremely obvious -- in retrospect. The way it works is that you take an airtight tube with a snugly fitting piston in it. Put the tube, with the piston at the top end, over some dry tinder. Then slam down the piston, while holding the tube hard against the ground.

The principle is that as you compress a gas, while keeping it insulated from the surroundings, it heats up. Compress it enough, and you get to the ignition point of your tinder. As far as I've seen, after doing a little looking when I saw the description of the Australian Aboriginal firestarter, they were the first people, by some thousands of years, to make use of the technology. Brilliant! Elegant!

The next technology to make use of the principle, as far as I know, is the Diesel engine, late 1800s.

For our climate concerns, we don't deal with such extreme or rapid compressions. But the principle holds: If we take a blob of air, insulated from the surroundings, and increase the pressure on it (because we're pulling it from a lower pressure part of the atmosphere to higher), it warms up. The converse is also true -- if we decrease the pressure (by moving to a higher (lower pressure) part of the atmosphere), then the gas cools off.
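For the ideal-gas case, the warming works out to T2 = T1 × (V1/V2)^(γ−1). A quick Python sketch; the 15:1 compression for the fire piston is a made-up round figure, not a measurement:

```python
# Adiabatic compression of an ideal gas: T2 = T1 * (V1/V2)^(gamma - 1).
gamma = 1.4  # ratio of specific heats for air

def compressed_temperature(t1_k, compression_ratio):
    """Temperature after adiabatic compression by the given volume ratio."""
    return t1_k * compression_ratio ** (gamma - 1.0)

# A fire piston: room-temperature air, a rough 15:1 compression.
print(round(compressed_temperature(293.0, 15.0)))  # well past tinder's ignition point
```

The gentle pressure changes of air moving around the atmosphere are the same physics at a far smaller compression ratio, which is where potential temperature comes in.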

This is another part of dealing with potential temperatures -- we'll get rigorous about just how much the air warms or cools. But that's another note.


If you have other examples of technologies or cultures using air compression heating between the firestarter and Diesel engine, please do mention them here or by email to me at plutarchspam at aim dot net. (It's a valid address, the 'spam' in it is part of the name. You could also use the bobg at radix dot net that is in my profile, but I get so much unfiltered spam there that chances are good I'll miss your note.)

Science is Always Changing

Earlier I repeated the common comment that climate is always changing, with some of my own thoughts about what that may or may not mean. Time to think about science itself, and how it is always changing. Just as the fact that climate is always changing doesn't mean that we can necessarily ignore the current changes, we can't always count on science to change in a way that we might (for other reasons) like. At the extreme, there are folks who say 'since science is always changing, you can't tell me that I'm wrong to think that the moon is made of green cheese.' Replace the moon being made of green cheese with any of very many other statements that are equally incorrect, today and for the future. While the science changes, there are some fairly specific ways in which it changes that exclude a number of those hoped-for conclusions.

One part that changes little or not at all is the observations themselves. Once you've got the observations, there they are. If your backyard thermometer says 125 F at a certain time, then that's what it said -- as best the thermometer and clock could be read. But what does that reading mean? Well, that's subject to some change. We may discover that the type of thermometer you used is error-prone, or biased, or has a bias that grows with age, or that you put it in direct noontime sun, or that it's right over your grill or dryer vent, or ... So, while the reading itself won't be changing, trying to make use of it quite possibly will. The meaning of the observation may well change as we learn more. Quite possibly in time we'll discover that your thermometer's reading doesn't tell us anything we want to know about the atmosphere, as such, no matter how informative it turns out to be about when you have cookouts. That's one avenue of change in science -- observations of some kinds are discovered to be unreliable for what we're trying to do. Conversely, observations that we didn't realize were useful for some problem may be made useful there by someone bright. It took over a decade, but the MSU is now useful for studying some things about climate.

A non-change is labeling. Recently, Pluto was reclassified as not being a planet after 70+ years of being called a planet. Absolutely nothing meaningful about Pluto changed -- its orbit takes the same time, its size is the same, its composition, estimated history, and so on, are all still the same. Calling it a dwarf planet or plutoid or whatever makes no difference to the science as such. In biology, classification can be a much more serious matter, but I'll let biologists discuss that.

Some sorts of things change, and poor science teaching or journalism would have you think the change was 'big'. But that's the weakness of the teaching or reporting. When I was much younger, Pluto had no known moon and Jupiter had 12. Now, Pluto has a moon (Charon) that's huge compared to what it orbits, and Jupiter and Saturn have so many 'moons' that there's another labeling argument about just what constitutes a 'moon'. In the bad teaching, students have to memorize trivia like how many moons Jupiter has. Tomorrow or next week, we'll discover another. But this doesn't change the science any. The science isn't in the list of moons. It is in questions like 'why are there moons?', 'how does a body hang on to a moon?', 'how long can a body hang on to a moon?'. (I sometimes teach astronomy. Numbering the moons never comes up on my tests; explaining why Jupiter has any does. The science of this won't be changing drastically, but the number of moons will. I want my students to focus on the more fundamental parts.)

A different sort of change, but not one that necessarily causes difficulties for prior science, is getting more and better observations. We've got vastly more meteorological data today thanks to satellites and radar systems than 40 years ago. Yet ... high and low pressure are still high and low pressure, storms happen, hurricanes happen, and we are still limited in how far ahead weather (as opposed to climate) can be forecast. What changed are things like how close to the limits we can come (and we do do a lot better than 40 years ago!). Sometimes we get enough new observations that a simple earlier explanation about, say, why there isn't much water vapor in the stratosphere turns out to be wrong. The fact that there isn't much water vapor up there doesn't change. But the explanation changed when we got enough new data to disprove the old one.

Where we see a lot of rapid change is in areas where we go from unable to say much -- because we have no suitable observations -- to being able to see the things. Astronomy has gained enormously from this in the last 40 years. Before, we were stuck with telescopes looking through the turbulent earthly atmosphere, which filtered out ultraviolet, most infrared, and all x-ray and gamma-ray energy. Now, we can see all those things from satellites and discover ... tons of things. (Yay!) When looking at new things, though, we tend not to change much of what we used to think, since those earlier conclusions were about things we could already observe. Sometimes the new look, as with more data on stratospheric water vapor, does help us change our explanations. For biology, being able to sequence DNA has been another revolution in being able to see things. But we already knew that DNA existed and had a major role in heredity. What changed isn't that sort of thing, but being able to see details and to compare details. (Again, Yay!)

Three major revolutions of the 20th century whose science I know something about are quantum mechanics, relativity, and plate tectonics. By the way, three books to read are Thirty Years That Shook Physics by George Gamow, Relativity by Albert Einstein, and The Origin of Continents and Oceans by Alfred Wegener (the 4th edition in English is available from Dover). All three are by prime players, and are readable by nonspecialists. In the case of Wegener, his idea (Continental Drift) is not the revolution that actually happened (Plate Tectonics), but his book is extremely interesting to read.

In all three cases, the theories, ideas, and methods that existed before the revolution were retained -- in the areas they'd been successfully tested in. When I try to examine how radiation is bent around a rain drop or scattered off a gas molecule, I use Maxwell's equations, not Quantum Electrodynamics (the part of quantum theory you'd use). For this problem, Maxwell's pre-quantum equations are sufficient. When we look at atmospheric circulation, we use Newton's equations, not Einstein's. If wind speeds were near the speed of light, we'd have to pay attention to Einstein and abandon Newton. Fortunately, wind speeds top out at 300 m/s, rather than 300,000,000 m/s, and we can do the simpler physics. Plate tectonics was a spectacular revolution, in the sense of unifying a number of observations (locations of mountains, volcanoes, and earthquakes, for instance) with a simple idea (plates bashing into each other, floating around at the surface of the earth). But ... nothing about the age of the earth, the composition of the various plates, the history of the rocks, etc. was changed by this revolution.

Particularly in quantum mechanics and relativity, this is known as the correspondence principle. As conditions move from the extreme conditions that led to the new idea back to the conditions the old ones were tested in, the new idea has to give the same answers (at least within the old observational tolerances) as the old method. This gives some sharp limits on just how large a revolution you can count on for the future. For instance, the earth was known to be much older than 6000 years by the early 1800s. That isn't going to be changing -- there are too many different lines (whether quantum mechanics, classical mechanics, plate tectonics, mineralogy, ...) that point to a much older earth. Now maybe the current (if I remember correctly) 4.55 billion years will get revised to 4.6 or 4.5 billion. But a 6000 year old earth is gone for good from science. Ditto a flat earth, or getting rid of the greenhouse effect, or a number of other things.

Since things do change, however, we're in a position to be informed spectators for some of them. Some areas have unsettled questions, and we can watch the progress in getting them nailed down. One which has seen some recent progress is expected sea level rise. After observations that some glaciers were moving and melting much faster than previously expected, estimates of sea level change for the next century went from fairly well constrained (we thought) to 'well, something like 50 cm, not counting whatever the things we don't know very well any more might do.' That left the door open for anything from not much addition (if the things that surprised us were done being surprising) to quite a lot of addition (if they could continue speeding up). The current estimate by Pfeffer and others is for 0.8 to 2 meters (2.5 to 7 feet) of sea level rise by 2100. This will be changing as more people look at just what the (current) uncertainties are, and what factors might come into play. Keep your eyes on this one. On the other hand, it's extraordinarily unlikely that sea level would start dropping.

Thursday, September 4, 2008

The Atmosphere in the Vertical

I'd started with a note on potential temperature, the vertical structure of the atmosphere, gas laws, and a few other things. After much typing, I realized that I'd done much typing and hence you'd have a lot of reading to do. So I'll divide things up a bit and hope that the result is more manageable and understandable.

In the note introducing the atmosphere, I mentioned that in the lower atmosphere (troposphere) and upper atmosphere (mesosphere), temperatures decrease with height. This is true, but if you've heard that 'hot air rises', I hope it gave you some discomfort. We'll ultimately resolve the problem by understanding potential temperature. But first, let's think some more about the vertical.

The troposphere is where we live, and where almost all of what we think of as 'weather' happens. It is between something like 10 and 20 km thick. Above this is the stratosphere. It is where the ozone layer is, and temperatures there are constant or increasing with height. It runs from the top of the troposphere, that 10-20 km elevation, to the base of the mesosphere (something like 50 km). The mesosphere runs from that something like 50 km up to something like 80 km. Finally, above 80 km is the thermosphere, where the atmosphere is so thin that different processes take over.

The 'something like' values there should make you uncomfortable. Even more so, that 10-20 km thickness of the troposphere. Why should it vary by so much? Partly, what is happening is that in hotter parts of the atmosphere (the tropics), pressure drops more slowly with height than in cold parts (the polar regions). If we use a thickness that relates to mass (pressure) rather than temperature, we find that the troposphere is between 700 (polar) and 900 (tropics) mb thick. Still thicker in the tropics, but no longer a factor of 2, so this is progress.
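A quick way to see why warm columns are 'thicker' in elevation: hydrostatic balance plus the ideal gas law give pressure falling off roughly exponentially with height, p(z) = p0 * exp(-z/H), with scale height H = RT/g. The column temperatures and tropopause heights below are rough illustrative numbers I've picked for the sketch, not precise climatology:

```python
import math

R = 287.0     # gas constant for dry air, J/(kg K)
g = 9.81      # gravitational acceleration, m/s^2
p0 = 1013.25  # mean sea level pressure, mb

def pressure_at(z_m, T_mean):
    """Pressure (mb) at height z (m) for a column of mean temperature T_mean (K)."""
    H = R * T_mean / g          # scale height, m
    return p0 * math.exp(-z_m / H)

# Rough illustrative values: warm tropical column with a high tropopause,
# cold polar column with a low one.
p_trop_tropics = pressure_at(17000.0, 260.0)  # ~17 km tropopause
p_trop_polar   = pressure_at( 9000.0, 240.0)  # ~9 km tropopause

print("Tropospheric thickness, tropics: %.0f mb" % (p0 - p_trop_tropics))
print("Tropospheric thickness, polar:   %.0f mb" % (p0 - p_trop_polar))
```

With those inputs the thicknesses land near 900 mb (tropics) and 730 mb (polar), in line with the 700-900 mb range above.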

Average atmospheric pressure at sea level is 1013.25 mb. This is the pressure of the column of air above you at that point. So the tropospheric thicknesses say that the troposphere is 70-90% of the mass of the atmosphere over a given point. If we used elevations instead, the troposphere would be only 10-20% of the vertical -- the remainder being the very thin gases of the stratosphere, and the even thinner gases of the mesosphere and above.

The stratosphere, then, is from the 100-300 mb level (above 70-90% of the mass of the atmosphere) to about the 1 mb level. In other words, it and the troposphere jointly account for 99.9% of the mass of the atmosphere. The mesosphere runs from about 1 mb to about 0.01 mb. Add this in, and we've got 99.999% of the atmosphere.
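Those mass fractions are just pressure differences divided by sea level pressure; a tiny check of the figures above:

```python
p0 = 1013.25  # mean sea level pressure, mb (= hPa)

def fraction_below(p_mb):
    """Fraction of atmospheric mass below the given pressure level."""
    return (p0 - p_mb) / p0

print("Below 1 mb (top of stratosphere):  %.4f" % fraction_below(1.0))
print("Below 0.01 mb (top of mesosphere): %.5f" % fraction_below(0.01))
print("Troposphere, tropics (to 100 mb):  %.2f" % fraction_below(100.0))
print("Troposphere, polar (to 300 mb):    %.2f" % fraction_below(300.0))
```

The printed values come out near 0.999, 0.99999, 0.90, and 0.70, matching the 99.9%, 99.999%, and 70-90% figures in the text.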

In terms of building models of the atmosphere, and for observing it, pressure levels make many things easier. If we took meters (or feet) in the vertical, spaced evenly, then we'd waste 80-90% of the levels on parts of the atmosphere that don't have the weather we're interested in. If we take pressure (the mb -- millibars) instead, then we only 'waste' 10-30%, a big improvement. For observing, it is easier to build a pressure gauge than to sense elevation above the ground. (Even elsewhere, pressure is often used instead of elevation, as in aviation.) Radiosondes, then, report in terms of the pressure they're experiencing at the time of an observation.

If we were being strictly correct, I should be replacing references to millibars (mb) with hectopascals (hPa). The latter (well, the Pascal) is the official scientific unit for pressure. 1 mb = 1 hPa, though, so we can just swap them. In other units, 1 mm of mercury is a bit more than 1 mb, there being 760 mm Hg in a standard atmosphere to the 1013.25 mb. Inches of mercury ... well, my great grandmother's barometer uses that (only), but it's not even close to what's used in science and hasn't been for a long time. Then there are pounds per square inch (14.7 being a standard atmosphere), and a host of others. I'm lazy about the mb versus hPa label because a) it's what I learned first, b) it's in more common use in the US than hPa, and c) most importantly, the conversion factor is 1. Still, if you're a younger reader, get used to the hPa and practice translating to it in your mind rather than perpetuating the deprecated mb.
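Since the conversions come up, here is a small sketch of a converter from hPa (= mb) to the other units mentioned, using the standard-atmosphere equivalences from the text (760 mm Hg and about 14.7 psi per 1013.25 hPa; the 29.92 in Hg figure is the usual textbook value):

```python
ATM_HPA = 1013.25  # one standard atmosphere in hPa (= mb)

# Equivalents of one standard atmosphere in other pressure units
EQUIV = {
    "mm Hg": 760.0,
    "in Hg": 29.92,
    "psi": 14.696,
}

def convert_hpa(p_hpa, unit):
    """Convert a pressure in hPa (= mb) to the requested unit."""
    return p_hpa / ATM_HPA * EQUIV[unit]

# A typical surface reading of 1000 hPa in the other units:
print("%.1f mm Hg" % convert_hpa(1000.0, "mm Hg"))
print("%.2f in Hg" % convert_hpa(1000.0, "in Hg"))
print("%.2f psi"   % convert_hpa(1000.0, "psi"))
```

And, of course, converting hPa to mb needs no function at all: the factor is 1.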

Tuesday, September 2, 2008

Question Place 2

Have questions about science, particularly relating to oceanography, meteorology, glaciology, climate? Here's a place to put them. For that matter, about running as well. Maybe I'll answer them here, and maybe (as happened with Dave's note back in May) they'll prompt a post or three in their own right.

Monday, September 1, 2008

Summary 1 of Simplest Climate Model

First, a much-delayed suggestion to go look at Atmoz's calculation of the effect of the earth being an oblate spheroid rather than a perfect sphere. More about that in a moment.

The simplest model discussion so far occupies a few posts:
In making the correction for the earth being an oblate spheroid rather than a perfect sphere, Atmoz found the 4 in the original formula (simplest model) should be changed to 4.00449, at least when we're at the equinoxes. As the earth's orientation changes, it will present somewhat different areas to the sun. Imagine a pancake, as an example of an extremely oblate spheroid. Put a pencil through it (the short direction) and consider that the rotation axis of your very oblate planet. Have a friend hold it up and tilt it some from the vertical. As you walk around your friend, you'll see the pancake edge on (equinoxes) and then more of the top or bottom as you get to the solstices. The amount of tilt will affect how much your view varies. The more tilt, the more variation. The earth's rotation axis is tilted about 23.5 degrees from the perpendicular to its orbital plane.
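For a perfect sphere, the 4 is just the ratio of total surface area (4 pi R^2, which radiates) to the sunlit cross-section (pi R^2, which intercepts sunlight). For an oblate spheroid seen edge-on at equinox, the same ratio can be computed from the standard surface-area formula; a sketch using the usual equatorial and polar radii of the earth reproduces Atmoz's factor:

```python
import math

a = 6378.137  # equatorial radius, km
c = 6356.752  # polar radius, km

# Surface area of an oblate spheroid:
# S = 2*pi*a^2 * (1 + ((1 - e^2)/e) * atanh(e)), e = eccentricity
e = math.sqrt(1.0 - (c / a) ** 2)
area = 2.0 * math.pi * a**2 * (1.0 + ((1.0 - e**2) / e) * math.atanh(e))

# Cross-section presented to the sun at equinox: an ellipse with
# semi-axes a and c, since we see the spheroid edge-on.
cross_section = math.pi * a * c

print("Area/cross-section ratio: %.5f" % (area / cross_section))  # ~4.0045
```

That ratio comes out at about 4.0045, matching the 4.00449 quoted above, versus exactly 4 for a perfect sphere.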

Atmoz found about a 0.07 degree C difference in the earth's computed temperature for the oblate spheroid (at equinox) vs. the perfect sphere. Probably something about that size for the solstices too, though we need the computation to be sure.

Here I've used an albedo of 0.30 and solar constant of 1367 W/m^2. Atmoz preferred 0.29 and 1366, respectively. Thanks to our looking into the sensitivity of the model in the analysis note, we know that this amounts to only about a 1 K difference in computing the earth's temperature. That suggests a few things. One is that the surface temperature averaging 288 K is unlikely to be explainable by modest changes to the values we used. Another is that it'd be a good idea to chase down some good sources on exactly what values should be used. Or at least that we'll want to find out what the levels of uncertainty are in these observable quantities.
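The size of that difference is easy to check directly from the simplest model's formula, T = [S(1 - albedo)/(4 sigma)]^(1/4), with the two sets of values from the text:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def simplest_model_T(solar_constant, albedo):
    """Effective temperature (K) from the zero-dimensional energy balance."""
    absorbed = solar_constant * (1.0 - albedo) / 4.0  # absorbed flux, W/m^2
    return (absorbed / SIGMA) ** 0.25                 # balance: absorbed = sigma*T^4

T_mine  = simplest_model_T(1367.0, 0.30)  # values used in these posts
T_atmoz = simplest_model_T(1366.0, 0.29)  # Atmoz's preferred values

print("T (S=1367, albedo=0.30): %.1f K" % T_mine)   # ~255 K
print("T (S=1366, albedo=0.29): %.1f K" % T_atmoz)  # ~256 K
print("Difference: %.2f K" % (T_atmoz - T_mine))    # under 1 K
```

The two temperatures differ by a bit under 1 K, and both sit roughly 33 K below the observed 288 K surface average, which is the gap the greenhouse effect has to explain.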