All the shining perfumes I splashed on my head,
And all the fragrant flowers I wore,
Soon lost their scent.
Everything I put between my teeth
And dropped into my ungrateful belly
Was gone by morning.
The only things I can keep
Came in through my ears.
Callimachus ca. 310-240 BC
In Pure Pagan: Seven Centuries of Greek Poems and Fragments, selected and translated by Burton Raffel.
17 December 2008
16 December 2008
Science and consensus
Sometimes people are right about a statement and then draw the wrong conclusion from it. Noting that science doesn't 'do' consensus is such a case. By the time you've progressed to the point of general agreement -- and a consensus is nothing more than general agreement, not universal agreement -- the point has dropped out of being live science.
The science is in the parts we don't understand well. That's effectively part of the definition for doing science. Dropping two rocks of different mass off the side of a building and seeing which one hits the ground first is no longer science. We reached consensus on that some time back. Now if you have a new experiment which tests something interesting (i.e., we haven't tested that one to death already), have at it and do that science.
I didn't appreciate it properly at the time, but a sign on the chemistry department door in my college put it best: "If we knew what we were doing, it wouldn't be science." The live part of science involves learning new things. If you already know what will happen, you're not learning new things, so you aren't doing science. After you've learned something new, and others have tested it and confirmed your learning, then we have a piece of scientific knowledge. It isn't live science any more, but it's a contribution to the world and can be used for other things. A consequence of this is that you wind up knowing a lot if you stay active in doing science. But it isn't the knowing that motivates scientists (certainly not me); it is finding out new things about the world.
So we have two sides to science -- the live science, where you don't have consensus -- and the consensus, the body of scientific knowledge that can be used for other things (engineering, decision making, ...). The error made by the people who try to deny, for example, the conclusions of the IPCC reports because 'science doesn't do consensus' is that they're confusing the two sides. The live science, which is summarized in the IPCC reports, doesn't have consensus. That's why it's live and why folks have science to do in the area. The body of scientific knowledge, which is also summarized in the reports, does have a consensus, and the reports describe in detail what that consensus covers and how strong it is.
It is possible that the consensus is wrong in its conclusions. But the folks denying it need it to be not merely wrong, but wrong in a very specific way. If they want to make scientific arguments, which is what, say, Wegener did in advancing continental drift in the 1920s, they can do so. But it is their responsibility to make the arguments scientifically and back them with strong scientific evidence, as Wegener himself noted. They don't do that.
I'll follow up the matter of the consensus having to be wrong in a specific way in a different note or two at a later date.
15 December 2008
How to decide climate trends
Back to trying to figure out what climate might be, conceptually, and then trying to figure out what numbers might represent it. A while ago, I looked at trying to find an average value (in that case, for the global mean surface temperature) and found that you need at least 15 years for your average to stabilize, 20-30 being a reasonable range. Stabilize means that the value of the average over a number of years is close to the average over a somewhat longer or shorter span of years. While weather can and does vary wildly, climate, if there is such a thing, has to be something with slower variation.
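That stabilization test can be sketched in a few lines of Python. The record here is synthetic -- an assumed slow trend plus weather-scale noise, standing in for a real data set like the NCDC figures -- so the particular numbers are illustrative only; the point is comparing the average over wider and wider windows.

```python
import random

random.seed(0)
years = 60
# synthetic annual anomalies: a slow climate-scale trend (0.01/yr,
# an assumption) plus weather-scale noise (sigma = 0.2, an assumption)
data = [0.01 * t + random.gauss(0, 0.2) for t in range(years)]

center = years // 2
averages = {}
for half in (2, 5, 10, 15):                 # +/- half-window, in years
    window = data[center - half : center + half + 1]
    averages[2 * half + 1] = sum(window) / len(window)

# if the averages for the longer windows sit close together, the
# average has "stabilized" in the sense described above
for span, avg in sorted(averages.items()):
    print(f"{span:2d}-year average: {avg:+.3f}")
```

With real data, you'd read in the observed record in place of the synthetic `data` list and look for the window length beyond which the averages stop moving much.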
But most tempests in blog teapots are about trends. I'm going to swipe an idea from calculus/analysis and have a look at deciding about trends. One of the reasons to take a variety of courses, including ones that may not seem relevant at the time, is to have a good store of ideas to swipe. Er, a strong research background.
As before, I'm going to require that the trend -- to be specific, the slope of the best-fit line, best in the sense that the sum of the squares of the errors is as small as possible -- become stable in the above sense. This is sometimes referred to as ordinary least squares, and even more breezily as OLS. I don't like that acronym since I keep reading it as optical line scanner, courtesy of a remote sensing instrument.
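For concreteness, the least-squares slope can be written straight from that definition: the line minimizing the sum of squared errors has slope sum((x - xbar)(y - ybar)) / sum((x - xbar)^2). This is a sketch of the textbook formula, not any particular package's implementation:

```python
def ols_slope(xs, ys):
    """Slope of the least-squares best-fit line through (xs, ys)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# points lying exactly on y = 2x + 1, so the fitted slope is exactly 2
print(ols_slope([0, 1, 2, 3], [1, 3, 5, 7]))   # -> 2.0
```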
There's a little more, however, that we can do. When I looked at averages, I took them centered on a given month or year. So estimating the climate temperature for 1978, say, involved using data from 1968 to 1988. The reason, which I didn't explain at the time, is that if climate is a slowly changing thing, then the temperature a year later tells you as much about this year as the temperature a year earlier. And, as a rule, tells you more about this year than the observations 2 years earlier.
My preference for a centered data span conflicts with what people would generally like to do -- to know what the climate of 2008 (for instance) was during, or at least only shortly after, 2008. On the other hand, you can't always get what you want. The priority for science is to represent something accurately. If you can't do that, then you have to keep working. A bad measure (method, observation, ...) is worse than no measure.
So we have two methods to look at already: 1) compute the trend using some years of data centered on our time of interest and 2) compute the trend using the same number of years of data but ending with our time of interest. I'll add a third: 3) compute the trend using the same number of years of data but starting with the year of interest. (This is the addition prompted by Analysis.)
In numerical analysis, we refer to these as the forward, centered, and backward computations (we move forward towards the point/time of interest, we center ourselves at the point/time of interest, or we look backwards to the point of interest). For a wide variety of reasons, we generally prefer in numerical analysis to use centered computations. In real analysis (a different field), where one deals with infinitesimal quantities, it is required that the forward and backward methods give the same result -- or else the quantity (I'm thinking about defining a derivative) is considered not to exist at that point. We're not dealing with infinitesimals here, so we can't require that they be exactly equal. On the other hand, if the forward and backward methods give very different answers from each other, it greatly undermines our confidence in those methods. If the difference is large enough, we'll have to throw them out.
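The three orientations can be sketched as follows, using the textbook least-squares slope. The record here is a toy, perfectly linear series, so all three estimates agree; on real data they generally won't, which is the whole point of the comparison. The labeling follows this post's convention: forward uses data ending at the year of interest, backward uses data starting at it.

```python
def ols_slope(xs, ys):
    # textbook least-squares slope
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

temps = [0.02 * t for t in range(100)]   # toy record: exactly 0.02/yr
year = 50                                # the time of interest
span = 21                                # years of data in each estimate
h = span // 2

# forward: the span of years ending at the year of interest
forward = ols_slope(range(year - span + 1, year + 1),
                    temps[year - span + 1 : year + 1])
# centered: the year of interest plus or minus h years
centered = ols_slope(range(year - h, year + h + 1),
                     temps[year - h : year + h + 1])
# backward: the span of years starting at the year of interest
backward = ols_slope(range(year, year + span),
                     temps[year : year + span])
print(forward, centered, backward)
```

For this linear toy series all three come out at 0.02 (to floating-point precision); disagreement between them only appears once the record has noise or curvature.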
So what I will be doing -- note that I haven't done the computations yet, so I don't know how it will turn out -- is to
1) take a data set of a climate sort of variable (I'll pick on mean surface air temperature again since everybody does; specifically, the NCDC monthly global figures)
2) for every year from 31 years after the first year of data to 31 years before the last year of data
(I'm taking 31 to be able to compute forward slopes, for the first year I show, over periods as long as that; likewise the 31 years at the end for backwards slopes)
I)
a) Compute forward slope using 3-31 years (for 3, 5, 7, 9, .. 31)
b) Compute centered slope using 3-31 years (meaning the center year plus or minus 1, 2, 3, 4 ... to 15)
c) Compute backward slope using 3-31 years (again 3, 5, 7, 9, .. 31)
II)
a-c) For each, look to see how long a period is needed for the result of the slope computation to settle down (as we did for the average). I expect that it will be the same 20-30 years, maybe longer, that the average took. If it's a lot faster, no problem. If it's longer, then I have to restart with, say, the data more than 51 years from either end.
3) Start intercomparisons:
a) compute differences between forward and backward slopes (matching up the record length -- only look at 3 years forward vs. 3 years backward, not vs. 23 years backward), look for whether the differences tend toward zero with length of record used. If not, likely rejection of forward/backward method. If so, then the span where it is close to zero is probably the required interval for slope determination.
b) ditto between the forward and centered slope computations. The differences will be smaller than 3a since half the data the centered computation uses is what the forward computation also used. Still, I'll look for whether the two slopes converge towards each other. If they don't, then the forward computation is toast.
4) Write it up and show you the results. I'm planning this for next Monday. Those of you with the math skills are welcome (and encouraged) to take your own shot at it, especially if you use more sophisticated methods than ordinary least squares, or other data sets than NCDC. But I'll ask you to hold off putting them on your blogs until after this one appears.
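Steps 1 through 3a of the plan above can be sketched like this. The record is synthetic (an assumed trend plus noise) rather than the NCDC data, and the spans and 31-year margins follow the numbers in the outline:

```python
import random

def ols_slope(xs, ys):
    # textbook least-squares slope
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

random.seed(1)
# synthetic record standing in for step 1's data set: an assumed
# 0.01/yr trend plus noise (sigma = 0.15, an assumption)
temps = [0.01 * t + random.gauss(0, 0.15) for t in range(140)]

margin = 31                        # stay 31 years from either end (step 2)
results = {}
for span in range(3, 32, 2):       # 3, 5, ..., 31 year spans (steps Ia, Ic)
    diffs = []
    for year in range(margin, len(temps) - margin):
        fwd = ols_slope(range(span), temps[year - span + 1 : year + 1])
        bwd = ols_slope(range(span), temps[year : year + span])
        diffs.append(abs(fwd - bwd))
    results[span] = sum(diffs) / len(diffs)   # step 3a comparison

for span in sorted(results):
    print(f"span {span:2d}: mean |forward - backward slope| = {results[span]:.4f}")
```

If the mean difference shrinks steadily as the span grows, that's the convergence step 3a looks for; a span where it stays small is a candidate for the required interval for slope determination.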
I'll also be providing links to sources (tamino, real climate, stoat, ... and others to be found) which have already done similar if not quite the same things.
Part of the idea here is to illustrate to my proverbial jr. high readers what a science project looks like, start to finish. Some aspects are:
- lay out a method before you start, and consider what it means both if the results are as you expect them to be, and if they're the other way around.
- consider what you'll do if they're different
- look at what other people have already done
- write it all up so that others can learn from what you did
10 December 2008
More blogs
I read quite a few more blogs than are on the blogroll. As I mentioned in the original blogroll note, these are ones that link over here. (And I might be missing some. Please let me know if I am.)
A recent addition is my daughter's, http://evenmoregrumbinescience.blogspot.com/ It'd be good to see more comments on her post about teaching physics to women (see 'No Silver Bullet', or 'Teaching Women Science'). Last I looked, it was only the two of us. And since I was the one who taught her how to build rockets, her responses aren't exactly surprises.
More from my reader:
Climate
- Climate Change: The Next Generation
- ClimateSpin
- NASA: JPL
- Real Climate
- William Connolley: Stoat
- Coby Beck: A Few Things Ill-Considered
Other, mostly biology:
- Phil Plait: Bad Astronomy
- Chris Nedin: Ediacaran
- ERV
- John Wilkins: Evolving Thoughts
- Mark Chu-Carroll: Good Math, Bad Math
- Chris Mooney and Sheril Kirshenbaum: The Intersection
- Two Minds
- PZ Myers: Pharyngula
- Troy Britain: Playing Chess with Pigeons
- Mike Dunford: The Questionable Authority
- Orac: Respectful Insolence
Due reminder: These other blogs, and my comments on them, may not be as mild-mannered as here.
09 December 2008
Who can do science
Everyone can do science. Most people, especially younger children, do so on a routine basis. Science is just finding out more about the universe around you. Infants playing peekaboo are conducting a profound experiment. They cover their eyes and everything vanishes. When they uncover their eyes, everything is back. Wow! Things have persistent existence! Even if you can't see them, they're still there. Then you cover your eyes, and the child can still see you. Existence continues. Whoa. Elaborate the game. One of you hides behind something. Then pops out. Whee! Things continue to exist even with your eyes open, even if they pass from view. No wonder children giggle at the game. This is a profound discovery about the nature of the universe.
In a similar vein, we can all speak prose in our native languages, or run. The question that arises later is whether you are doing it at a professional level. I run, for instance. Most of us can. And, with appropriate training, almost all of us can, say, run a marathon. Very few of us can run a marathon, however, at the pace that elite runners do. Even fewer could do so without undertaking very serious, elite-level training to make the attempt. Similarly, while we can all talk, and most of us write, very, very few are realistic candidates for the best-seller lists, or Nobel prizes.
So it goes with science. Doing it at a professional level is a lot harder than doing it at all. One thing you often encounter in coming up with ideas is discovering that your wonderful, creative idea was already thought of. I give myself points (when looking outside my field) for how recently it was thought of. More than 300 years ago, only 1 point. Less than 200 is more, and less than 100 even more. Every so often I manage to go out of field and come up with an idea (new to me) that professionals thought of only 30-50 years ago. On rare occasions, I come up with one that they didn't come up with until within the last 30 years. I give myself a lot of points for those. They're pretty rare.
That's one part of doing science at a professional level -- your idea or discovery has to be not only new to you, but new to the world. Consequently, a lot of the training for becoming a professional involves learning what is already known. The answer is, unfortunately for those of us who'd like to make a grand splash of some kind, a lot. Worse, there are now centuries of very creative, knowledgeable people who have been working at it. Coming up with something novel is, therefore, hard.
A high school friend illustrated this neatly, if accidentally, for me. We met up over a holiday early in our college careers and he was complaining about the lack of creativity in computer science. For instance, he thought something new and good was to be had by looking at 3-valued logic systems rather than the 2-valued logic expressed in binary computers, and was confident such an idea would never be looked at. The week before, I had been at a presentation about 3-valued logic circuits and why they'd be useful. And the novel part was not the idea, which was much older, but how the speaker planned to implement it in hardware.
Conversely, if you'd like to do something novel, you're much better off looking at an area that is new, using new equipment, etc., and so hasn't had a long history for people to work out a large number of ideas. In that vein, it's much easier to pull off in the satellite remote sensing of tropospheric temperatures for climate (a topic less than 20 years old) than in the surface thermometer record (well over 100 years old). A fellow I know published in the prestigious Geophysical Research Letters largely on the strength of this point. The paper came out in 2003. He looked at how the Spencer and Christy satellite algorithm worked, and realized that it assumed something which in high latitudes was not a good assumption. He then worked out what the implications were (i.e., that trends in sea ice cover would be falsely reported as trends in temperature), and documented it well enough to be published in the professional literature:
Swanson R. E., Evidence of possible sea-ice influence on Microwave Sounding Unit tropospheric temperature trends in polar regions, Geophys. Res. Lett., 30 (20), 2040, doi:10.1029/2003GL017938, 2003. (You can follow this up at http://www.agu.org/)
Now, the thing is, Richard did not have a doctorate. He had a master's. And his master's was not in science; it was in engineering. What mattered is that he saw something that hadn't been noticed, documented it well, and submitted it to a professional journal. And he got published in this high-profile journal even though he had no PhD, nor had he even previously worked in the field. I keep him in mind when people talk about the 'conspiracy to keep out' ... whoever.
On a different hand, as an undergraduate I did do work worth a coauthorship on a significant journal paper. (Significant journal, that is; whether the paper was significant, I leave to its readers.) But that was working for a faculty member, and while my contribution was indeed worthy of a coauthorship (I realized this later; at the time I assumed Ed was simply being a nice guy -- which he was, but that turned out to be a different matter), I couldn't have gotten the project started on my own. Once started in a fruitful area, I could have finished it, but in a professional, you want to see the ability to find out what the fruitful area is.
So how young can you go; how much experience is needed? Well, if you choose right, and are creative enough, jr. high. My niece managed a science fair project last year that I still encourage her to write up for serious publication. She hit on an idea in an area that hasn't been studied a lot already (it's new and people there have been assuming an answer, she documented it -- good science) and a way of testing it (ditto) and collected the data and evaluated it scientifically. Yay! She might need a hand on the professional writing and statistics description, but the science part, she nailed solo.
Where does that leave us as readers of blogs and such? Alas, it means we have to think. The presence of a PhD is not a guarantee of correctness. Nor is the absence of one a guarantee of error. And this remains true even if we consider what area the PhD was in and the like. What is more reliable is that the older an area (surface temperature record interpretation, for instance), the less likely it is that someone can make a contribution or correction without doing quite a lot of work. The field is long past the point where it's likely that they've not noticed the urban heat island, for instance. (I haven't searched seriously, but have already run across reference to it from the early 1950s.) Blog commentators who plop this one down as if it were an ace of trump: "Ha, they didn't consider the urban heat island. Therefore, I can conclude whatever I want." or the like, can speedily be added to your list of unreliable sources. The urban heat island has been considered, quite often, for longer than they've been alive. It's an old field. Tackling ARGO buoys is a less overwhelming obstacle. (But be sure your math is up to the work!)
As for younger people (those of you who are, which doesn't seem to be many, alas; but you parents, remember it for your kids' sake), it means that the time to start working on doing science is today. Do your own science (meaning, learn things about the world), and try to do some professional-level science too (try to learn things that nobody else has figured out yet). The heart of science is in finding things out. This doesn't have to have anything to do with what you're doing in school. Work on things that interest you, whatever they may be.
In similar vein, we can all speak prose in our native languages, or run. The thing which becomes a question later along is whether you are doing it at professional level. I run, for instance. Most of us can. And, with appropriate training, almost all of us can, say, run a marathon. Very few us of can run a marathon, however, at the pace that elite runners do. Even fewer could do so without undertaking very serious, elite-level, training to make the attempt. Similarly, while we can all talk, and most of us write, very, very few are realistic candidates for the best seller's lists, or Nobel prizes.
So it goes with science. Doing it at a professional level is a lot harder than doing it at all. One thing you often encounter in coming up with ideas is to discover that your wonderful, creative, idea was already thought of. I give myself points (when looking outside my field) for how recently it was thought of. More than 300 years ago, only 1 point. Less than 200 is more, and less than 100 even more. Every so often I manage to go out of field and come up with a new idea (to me) that professionals thought of only 30-50 years ago. On the rare occasion, I come up with one that they didn't come up with until within the last 30 years. I give myself a lot of points for those. They're pretty rare.
That's one part of doing science at a professional level -- your idea or discovery has to be not only new to you, but new to the world. Consequently, a lot of the training for becoming a professional involves learning what is already known. The answer is, unfortunately for we who'd like to make a grand splash of some kind, a lot. Worse, there are now centuries of very creative, knowledgeable people who have been working at it. Coming up with something novel is, therefore, hard.
A high school friend illustrated this neatly, if accidentally, for me. We met up over a holiday early in our college careers and he was complaining about the lack of creativity in computer science. For instance, thinking that something new and good was to be had by looking at 3 value logic systems rather than 2 value as was expressed in binary computers. He was confident such an idea would never be looked at. The week before, I was at a presentation about 3 value logic circuits and why they'd be useful. And the novel part was not the idea, which was much older, but how the speaker planned to implement it in hardware.
Conversely, if you'd like to do something novel, you're much better off looking at some area that is new, using new equipment, etc., so hasn't had a long history for people to work out a large number of ideas. In that vein, it's much easier to pull off on the satellite remote sensing of tropospheric temperatures for climate (a topic less than 20 years old) than for the surface thermometer record (well over 100 years old). A fellow I know published in the prestigious Geophysical Research Letters, largely on the strength of this point. Paper came out in 2003. He looked at how the Spencer and Christy satellite algorithm worked, and realized that it assumed something which in high latitudes was not a good assumption. He then worked out what the implications were (i.e., the trends in sea ice cover would be falsely reported as trends in temperatures), and documented it well enough to be published in the professional literature:
Swanson R. E., Evidence of possible sea-ice influence on Microwave Sounding Unit tropospheric temperature trends in polar regions, Geophys. Res. Lett., 30 (20), 2040, doi:10.1029/2003GL017938, 2003. (You can follow this up at http://www.agu.org/)
Now, the thing is, Richard did not have a doctorate. He had a master's. And his master's was not in science, it was engineering. What mattered is that he saw something that hadn't been noticed, documented it well, and submitted it to the professional publication. And he got published in this high profile journal even though he had no PhD, nor even previously worked in the field. I keep him in mind when people talk about the 'conspiracy to keep out' ... whoever.
On a different hand, as an undergraduate I did do work worth a coauthorship on a significant journal paper. (Significant journal, that is, whether the paper was significant, I leave to its readers.) But that was working for a faculty member, and while my contribution was indeed (I realized later, I assumed that Ed was simply a nice guy -- which he was, but that turned out to be a different matter) worthy of a coauthorship, I couldn't have gotten the project started on my own. Once started in a fruitful area, I could have finished it, but for a professional, you want to see the person be able to find out what the fruitful area is.
So how young can you go; how much experience is needed? Well, if you choose right, and are creative enough, jr. high. My niece managed a science fair project last year that I still encourage her to write up for serious publication. She hit on an idea in an area that hasn't been studied much already (it's new, and people there have been assuming an answer; she documented it -- good science), found a way of testing it (ditto), and collected the data and evaluated it scientifically. Yay! She might need a hand on the professional writing and the description of the statistics, but the science part she nailed solo.
Where does that leave us as readers of blogs and such? Alas, it means we have to think. The presence of a PhD is not a guarantee of correctness. Nor is the absence of one a guarantee of error. And this remains true even if we consider what area the PhD was in and the like. What is more reliable is that the older an area (surface temperature record interpretation, for instance), the less likely it is that someone can make a contribution or correction without doing quite a lot of work. The field is long past the point where it's likely that they've not noticed the urban heat island, for instance. (I haven't searched seriously, but have already run across reference to it from the early 1950s.) Blog commentators who plop this one down as if it were the ace of trumps -- "Ha, they didn't consider the urban heat island. Therefore, I can conclude whatever I want." or the like -- can speedily be added to your list of unreliable sources. The urban heat island has been considered, quite often, for longer than they've been alive. It's an old field. Tackling the ARGO buoys is a less overwhelming obstacle. (But be sure your math is up to the work!)
For younger people (those of you who are -- which doesn't seem to be many, alas; but you parents, remember it for your kids' sake), it means that the time to start working on doing science is today. Do your own science (meaning, learn things about the world), and try to do some professional-level science too (try to learn things that nobody else has figured out yet). The heart of science is in finding things out. This doesn't have to have anything to do with what you're doing in school. Pick things that interest you, whatever they may be.
08 December 2008
Question Place 4
New month and I'm here, so here's a new spot for questions (plus comments and suggestions).
I'll put one out myself, on a non-climate issue (fortunately, your questions needn't be on climate either). I've been thinking a bit about my reading, and noticing that almost all of it was originally written in English. That's ok, as there's more good stuff written first in English than I can hope to read. Still, there's some awfully good writing in other modern languages. So, I'll welcome suggestions for good fiction you've read that was originally written in another language (but has a decent translation in English). I've already got fair ideas for French, German, and Russian, and a little for Czech, Italian, and older Chinese and Japanese. But that leaves a lot of languages untouched.
Some areas of study
In writing the recent note on what a PhD means, it occurred to me that it might be worth mentioning the areas that I've taken classes in. This, after agreeing with the comment that you can't presume that a PhD person has more than a 101-level knowledge in areas outside of what they study themselves. You can't presume it, but then again, odds are good that there are some areas where a person goes above the 101 level.
My schools used peculiar numbering schemes, so I'll partition it directly by who was in the class:
Graduate level areas:
- Astrophysics: galaxies, interstellar medium, cosmology, astrophysical jets
- Geosciences: geophysical fluid dynamics, geochemistry, atmospheric chemistry, numerical weather prediction, tides, radar meteorology, cloud physics, ...
- Engineering: engineering fluid dynamics, ...
- Math: asymptotic analysis, partial differential equations, ...
- Linguistics: syntactic analysis, computational linguistics
- Paleoclimatology
- History: History of Science, Intellectual History of Western Europe
- Physics: Quantum Mechanics, Solid State Physics, Nuclear and Particle Physics
- Math: bunches, including probability, statistics, differential geometry, nasty things to do to ordinary and partial differential equations, numerical ways of beating on such equations and systems of equations
- Physical chemistry
- ... and probably several more
I'm leaving out a number of things because, well, I don't remember everything offhand, much less in order. But it's a sampling. One thing not missing is any lower level courses in astronomy and astrophysics -- I started with the graduate level courses. Also not missing is my courses in glaciology. I've never taken one, but my first (coauthored) paper was on the subject. I later wrote one solo on a different area of glaciology. Absence of courses is not a guarantee of absence of professional level knowledge. One thing, ideally, you learn along the way to your PhD (better if you get in practice while still in elementary school!) is how to teach yourself new subjects.
05 December 2008
National Academies Survey
The National Academies (US, for science and for engineering) are holding a survey to see what it is that people are interested in hearing about in science and engineering. The choices are limited, but that makes for an easy poll to answer. Less than 2 minutes for me. Maybe 30 seconds.
http://www.surveygizmo.com/s/75757/what-matters-most-to-you-iv
04 December 2008
What does a PhD mean
The trivial answer to the subject question is 'Doctor of Philosophy', which doesn't help us much. I was prompted to write about it by responses to Chris's comments on the pathetic petition over on Chris Colose's blog, wherein a reader seemed to think that once one had a PhD, one had received a grant of omniscience.
Ok, not quite. Rather, to quote Stephen (11 June 2008) directly: "...a person with a PhD is more apt to think critically before making a decision. Granted, it’s not true in every case, obviously, but someone with a PhD in any scientific field is statistically more likely to look at all of the information available to them." Unfortunately, he never gave us a pointer to where those statistics were gathered -- the ones that supported his claim that PhDs in a scientific field were 'statistically more likely' .... I'm reminded of the observation that 84.73% of all statistics on the net are made up.
The details of what a PhD means vary by advisor, school, and era. But for what's at hand, the finer details don't matter. One description of doctoral requirements is "an original contribution to human knowledge". This much is true whether we're talking about science or literature. The resulting contribution should (more so these days than a century ago) be publishable, and published, in the professional literature. Notions vary, but there's also a principle that someone who earns (or is a candidate to receive) a PhD should conduct the work with significantly less guidance than an MS candidate, and far less than an undergraduate. Again, true whether science or literature.
One thing you don't see there is 'more apt to think critically' about everything they comment on. You also won't find 'look at all information' about everything. The 'about everything' is my addition, not part of the exact quote. But for the comments to be meaningful, they have to apply to the specific thing at hand, whatever that is, whether it's the pathetic petition or other things allegedly about science.
The possession of a doctorate says, instead, that the owner is likely to be capable of making an original contribution to knowledge, without too much guidance from someone else. This is a pretty good sign. But it hardly means that the owner has been turned into Mr. Spock. PhD holders are human still. We all have capabilities, PhD or no. And we humans don't always exercise the highest of our abilities. The area where you can bet (if not guarantee) that a PhD holder is more apt to think critically and consider all information is the area of their professional work. Outside that ... you're much better off to either ask whether they brought full ability to bear, or to assume that they didn't.
In saying that, remember, I do have a PhD myself. It is possible that a PhD-holder is bringing full abilities to bear. If so, then they can fare better than most non-PhDs in evaluating things which claim to be science. I did, for example, take such a look at a couple of different scientific papers regarding left-handedness (I'm left-handed and interested in the topic). I thought they were very bad, for a number of reasons of 'how you do science'. The serious work, however, was done by the people who were in the field (PhD or no) and wrote the rebuttal papers for the peer-reviewed literature. They named many things that I got, and many more besides. Being a scientist got me about 1/3rd of the way through the list of errors that the original authors had committed.
On the other hand, many of the things I looked at and for in evaluating those papers were things I'm discussing here and seriously believe a jr. high student can learn to apply regularly.
So where are we? As Chris suggested in his response to Stephen: "From experience with my professors though, I wouldn’t ask many of them a question outside of their field, at least beyond 101 level stuff, so maybe not. But at the same time, none of them would go off signing petitions about things they know little about." Those are both good rules of thumb. Outside the professional field, a PhD-holder can't be presumed to know more than (or, depending on field, even) 101 level stuff. But most of us know this about ourselves, so toss the junk mail petitions where they belong when they arrive.
03 December 2008
Words to beware of
Some words have good meanings in normal conversation, and different meanings in science. 'Theory' is one such. But one that is seriously hazardous to try to interpret until you know the full context is 'rapid'. For folks studying chemical reactions, that can be a femtosecond. For geologists studying tectonic processes, it can be millions of years ('rapid uplift of the Himalayan plateau').
Even within a field, say glaciology, you can be looking at a few hours (rapid breakup of an ice shelf) to a few thousand years (rapid onset of an ice age).
Any other contributions of words that vary widely between fields, or even within a field? We'll take 'sudden' and the like as covered by 'rapid'.
01 December 2008
Plotting software
Time for some collective wisdom. What are some good, noncommercial (or at least not expensively commercial; Matlab does a good job, but the price tag has 4 digits left of the decimal, and 2 would be ok) plotting packages? One of the best I ever encountered was CricketGraph, but that was back in 1990 or so. They seem defunct and it's getting hard to run on modern computers. (Impossible with Mac OS X 10.5, doable in 10.4; but it came out in the era of OS 6.) Mac or *nix platforms.
I'm not trying to do anything elaborate, just view some data, perhaps multiple x or y axes, logarithmic axes, put labels where I'd like (at a click, not by computing displacements, to name a flaw of GrADS). Data to come from plain text files. I suppose I can insert commas if the software insisted. I'd as soon be able to go to a few hundred thousand data points, and 10,000 or so is definitely required. If it turns out that 'grapher' (shipped with Macs) does the job, I'll register my embarrassment and ask how to make it read in a data file.
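For what it's worth, one free option that covers most of this list on Mac and *nix is matplotlib (with numpy to read plain text files). A minimal sketch -- 'data.txt' is a made-up filename, and the sample data is written out only so the example is self-contained:

```python
import numpy as np
import matplotlib.pyplot as plt

# Write a small sample data file (two whitespace-separated columns,
# no commas needed) so the example runs on its own.
with open("data.txt", "w") as f:
    for i in range(1, 101):
        f.write(f"{i} {i * i}\n")

# Read plain-text columns straight into arrays.
x, y = np.loadtxt("data.txt", unpack=True)

fig, ax = plt.subplots()
ax.plot(x, y)
ax.set_yscale("log")                       # logarithmic axes supported
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.annotate("label here", xy=(x[0], y[0]))  # labels placed at chosen points
fig.savefig("plot.png")                     # or plt.show() interactively
```

It handles hundreds of thousands of points without trouble, though interactive click-to-place labeling takes a bit more setup than CricketGraph offered.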
27 November 2008
Population density and climate
Something that struck me quite some time ago as an eyeball-quality correlation was that population density was higher in areas with more rainfall. A proper study of this should look through history and be rigorous about both quantities. As long distance large-scale trade of food became possible, it'll also be necessary to look at water requirement for food and for people separately.
But, as a start and something of a Fermi estimate, let's go with what my water company tells me is typical per-person water usage at home -- 70 gallons per day. That's approximately 250 liters. Typical annual rainfall around here is about 1 meter per year. So, if I captured all the rain that fell on an area, how large would that area have to be for me? If everyone else did the same, how many of us could live in 1 square km?
The 250 liters are what I'd use in 1 day if I were approximately 'average'. For a year, I need 365 times that much, for about 100,000 liters. 1 liter is a cube 0.1 m on a side, so for a year's water I need about 100 m^3. That'd fill 1 meter deep across a square 10 meters by 10 meters. Since I get about 1 meter rain per year, this says I need about 100 square meters. If we all caught absolutely all rain, and didn't lose any to trees, farming, industry, grass, ... this would let us go up to a population density of 10,000 per square km (about 25,000 per square mile). This is actually something like the density of the highest density large cities -- check out Chicago, New York, London, Paris, for instance.
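The arithmetic above is short enough to sketch in a few lines of Python; the 70 gallons/day and 1 meter/year figures are the round numbers from the text, the rest is unit conversion:

```python
# Fermi estimate: how many people per square km could local rainfall
# support, if every drop were captured for residential use?
GALLONS_PER_DAY = 70          # per-person usage, per the water company
LITERS_PER_GALLON = 3.785
RAINFALL_M_PER_YEAR = 1.0     # typical for the author's area

liters_per_day = GALLONS_PER_DAY * LITERS_PER_GALLON   # ~265 L/day
m3_per_year = liters_per_day * 365 / 1000              # ~100 m^3/year
area_per_person_m2 = m3_per_year / RAINFALL_M_PER_YEAR # ~100 m^2
people_per_km2 = 1_000_000 / area_per_person_m2        # ~10,000

print(round(people_per_km2))
```

As with any Fermi estimate, only the leading digit and the power of ten mean anything.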
Los Angeles gets about 0.4 m/year rain, and Phoenix about 0.2. Their densities are about 3200 and 1200 per square km, respectively (Wikipedia for both numbers), compared to the 4000 and 2000 we'd guess from the above. On the other hand, both illustrate the fact that cities don't rely on only the rain that falls on them. Both have extensive systems to bring water to them. This is not new; Rome, to support its population in the days of Empire, when they were also using tens of gallons of water per day, built a tremendous system of aqueducts to bring water to the city. New York and Chicago, I know, also bring water from outside the city (again water systems, rivers, lakes).
If we figure on about 10% of the rainfall being captured for residential use, then we're down to about 1000 per square km, or about 2500 per square mile in my part of the country. These are densities comparable to what is seen over moderately large areas (entire metropolitan regions, small countries in northern and western Europe, ...) with meter per year rainfall.
There's a certain reasonability, then, to there being a relation between rainfall and population density. You can exceed that relation, but only at the expense of building a large-scale water system -- and not letting people in the areas you bring water from have access to it. Water rights have a lengthy and often not peaceful history. In any case, even if all the people are in one place, their requirement extends over a larger area, something more in accord with the 1000 per square km (for 1 m/year rainfall). For, say, the 20 million or so people in the Los Angeles area, with 0.4 m/year of rain, it says their footprint is more like 50,000 km^2, compared to the 4000 or so the metro area actually occupies. These are all still Fermi estimates, of course. It does point out to us, however, that urban areas likely have a footprint rather larger than the official area. Conversely, it means that they need to be concerned about weather and climate over a larger area than just their own borders.
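The footprint arithmetic can be sketched the same way; the population, rainfall, 10% capture fraction, and 100 m^3/person/year are the round numbers from the text, not measurements:

```python
# Fermi estimate of the Los Angeles area's water footprint.
population = 20e6            # people in the metro area, roughly
rainfall_m = 0.4             # meters of rain per year
capture_fraction = 0.10      # fraction of rainfall usable for residences
m3_per_person_year = 100     # per-person annual water need, from above

usable_m_per_year = rainfall_m * capture_fraction   # 0.04 m of usable water
area_m2 = population * m3_per_person_year / usable_m_per_year
area_km2 = area_m2 / 1e6

print(round(area_km2))   # ~50,000 km^2
```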
The climate concern is ... what do you do if you get less rain (or less snow)? What do you do if the rain comes more in situations (thunderstorms) that are harder for you to capture the rainwater from? Either of these changes drives you to a lower population in your urban (including suburbs) area, or pins a new expense on you -- to build more extensive water systems, and systems able to handle greater rainfall rates. While we mostly expect rainfall to increase, we do expect that there will be areas which will see falls. Which ones, not so sure, but some. It's expected and observed that there'll be an increase in rain falling in heavy doses even where there's little change in total rainfall.
A friend mentioned a TV person-in-the-street interview during a local drought. The person wasn't concerned about the drought and low river levels because "I don't get my water from the river, I get it from the tap." I trust you all know that it went from the river through a processing system to her tap even if she didn't. (Or, in the case of Chicago, from Lake Michigan -- whose level seems to be falling more than the usual cycle.)
Even more people seem not to realize that wells have the same problem. Across much of the US Great Plains (take Kansas for a central example state), farming and residences take advantage of the Ogallala Aquifer. The problem is, aquifers need recharging. That is, the water in the aquifer comes from rainfall elsewhere. With the Ogallala today, the usage exceeds the recharge rate. This is not very surprising, as the aquifer's recharge area is itself in fairly dry country. For more, see the USGS web site; search on Ogallala Aquifer and recharge. Similar problems exist for many wells. The main difference between wells and lakes or rivers is that you can't see the levels dropping as easily.
26 November 2008
Science fiction and science
Many scientists are (or were) science fiction readers, and I'm no exception. Some questions are currently making the rounds for the ScienceOnline '09 meeting, from http://almostdiamonds.blogspot.com/2008/11/science-and-fiction-open-call.html
and I'll take a shot at them myself:
- What is your relationship to science fiction? Do you read it? Watch it? What/who do you like and why?
- What do you see as science fiction's role in promoting science, if any? Can it do more than make people excited about science? Can it harm the cause of science?
- Have you used science fiction as a starting point to talk about science? Is it easier to talk about people doing it right or getting it wrong?
- Are there any specific science or science fiction blogs you would recommend to interested readers or writers?
2) Almost all SF is actually engineering-oriented rather than science-oriented. Some technology is developed which has effects on society or people, and then we wonder what they're going to be. Or some part of the universe (aliens, black holes, ...) drops in on our characters and we wonder how they're going to stay alive .... And so on. I don't see this as a conflict with science; in fact, it supports well what I think are some very important attitudes for doing science, or for living in a society where science is important:
- The universe is a very interesting place (so study it)
- Understanding more about the universe can keep you alive
- Science translated to technology can affect how you live (so think about the social effects sooner rather than later)
- Problems are (generally) solvable, the universe is (often) understandable
3) I don't use SF specifically; perhaps I'd do so more if I were teaching more. But I do take advantage of a somewhat SFnal view of the universe in doing my research. That is, I'm trying to understand, say, the earth's climate. That's only one place with one particular set of conditions. What (the SF-fan in me asks) would it be like if the earth rotated much faster, more slowly, if the sun produced less UV (hence less ozone on earth, hence less greenhouse effect in the stratosphere, hence ...?), if the earth were farther away/closer in, and so on. I can't say that it's resulted in any journal articles that I wouldn't have written anyhow, but it does make it easier for me to, say, read paleoclimate papers (the earth did rotate faster in the past, sea level has been much higher and lower than present, ...)
4) As to recommendations ... I suppose the main one would be something that SF (that I saw) didn't predict we'd be taking advantage of: Read a bunch of them, and written from different viewpoints including those which disagree strongly with your own.
25 November 2008
Fermi estimate challenge
I'll invite you to help me with this challenge, namely by providing the challenges. Enrico Fermi was famous for being able to estimate physical quantities even in situations where he did not know (and, possibly, nobody knew) what the actual answer was. This is enormously helpful in science. One of the things we need to know is whether the answer we got back from our observing system or calculation was reasonable. When you start working in brand new areas, it's much harder to know what is reasonable. A Fermi estimate gives you that first guess. As Fermi also was a Nobel Laureate and did a lot of creative, original work, it might be a good thing if we practiced this skill ourselves.
Fortunately, you don't have to be a Nobel Laureate to do it, and the subject needn't be one on the frontiers of human knowledge. I made some use of it, for instance, regarding the company 'Joe the Plumber' wanted to buy. (After doing so: it isn't a small company.) The classic example is to estimate how many piano tuners there are in New York City. But I've seen that one a bunch of times, and don't have a piano, so it lacks something for meaning. The reason I need your help is that anything that leaps to my mind to create an example from will be in some area that I know something about already. So, what number -- about the world -- would you like to see estimated? No 'guess what number I'm thinking of' or 'what is the tangent of a trillion', but something about how many, how big, how hot, ... of something in the observable universe.
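To show the flavor of the method, here is the classic piano tuner estimate written out as a chain of rough factors. Every number in it is my own guess for illustration, not a researched figure; the point of a Fermi estimate is that even crude factors, multiplied through, land you within a factor of a few of reality.

```python
# A Fermi estimate of piano tuners in New York City, as a chain of
# rough factors. Every number here is a guess for illustration.
population = 8_000_000            # people in NYC, roughly
people_per_household = 2          # so ~4 million households
pianos_per_household = 1 / 20     # guess: 1 household in 20 has a piano
tunings_per_piano_per_year = 1    # guess: tuned about once a year
tunings_per_tuner_per_year = 2 * 250  # ~2 tunings/day, ~250 workdays

pianos = population / people_per_household * pianos_per_household
tunings_needed = pianos * tunings_per_piano_per_year
tuners = tunings_needed / tunings_per_tuner_per_year
print(round(tuners))  # order of magnitude: a few hundred
```

If any single factor is off by a factor of two, the answer moves by a factor of two -- but errors in different factors tend to partially cancel, which is why such estimates are so often good to within an order of magnitude.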
19 November 2008
Large population countries
Partly because of my recent trip(s) to China, but also as a continuation of an idea I started in describing the ocean and atmospheres, I'll take a look at national populations today. In terms of the oceans and atmospheres, I looked at what they were composed of, and found that rather few elements were sufficient to cover a large fraction of each. It's a larger group for countries, but, still, fairly few (less than 10%) of the countries are required to cover over half the population of the world. The figures here are 2005 numbers from the 2006 Information Please almanac. They'll have changed since then, of course, but more on that later.
From a world population of about 6.6 billion, countries with more than 1% of the world's population are:
- China, 1300 million
- India, 1100
- USA, 295
- Indonesia, 242
- Brazil, 186
- Pakistan, 162
- Bangladesh, 144
- Russia, 143
- Nigeria, 128
- Japan, 127
- Mexico, 106
- Philippines, 88
- Vietnam, 84
- Germany, 82
- Egypt, 77.5
- Ethiopia, 73.1
- Turkey, 69.7
- Iran, 68.0
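A quick computation from the list above backs up the claim that rather few countries cover much of the world's population. Using the 2005 figures quoted (in millions), the 1% threshold against a 6.6 billion world population is 66 million people:

```python
# Populations in millions, the 2005 figures from the list above.
world = 6600  # millions
populations = {
    'China': 1300, 'India': 1100, 'USA': 295, 'Indonesia': 242,
    'Brazil': 186, 'Pakistan': 162, 'Bangladesh': 144, 'Russia': 143,
    'Nigeria': 128, 'Japan': 127, 'Mexico': 106, 'Philippines': 88,
    'Vietnam': 84, 'Germany': 82, 'Egypt': 77.5, 'Ethiopia': 73.1,
    'Turkey': 69.7, 'Iran': 68.0,
}
total = sum(populations.values())
over_one_percent = sum(p / world > 0.01 for p in populations.values())
print(f"Countries over 1%: {over_one_percent}")
print(f"Share of world population: {total / world:.0%}")
```

Those 18 countries -- under 10% of the world's roughly 200 countries -- hold about two-thirds of the world's people.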
One thing this suggests to me is that for modern citizenship, history, geography classes, it would be a good idea to learn some specifically about these countries. If only some bare elements of things like capitals, languages, religions, a bit of history, etc. In bygone days (i.e., when I was in elementary school), we did do that sort of thing, but only for the US plus western Europe. You'll notice up there that only 1 western European country is on the list. In other words, such an education didn't do much good towards living in the world I find myself in. Actually, my school did cover most of the rest of the list but we were distinctly odd for our time and area.
The up side of learning about this set is that, while it covers a large fraction of the world's people, the list is short.
If I drew up the comparable list for, say, 1939, you'd find far more European countries present. If I did it for 1880 or so, it would be even more Europe-heavy. I'll get to those at a later date.
A different list would show up if I did it in terms of the global economy, one much heavier on Europe. But you'd see many of the same countries on that list. 4 of the G8 are already on this listing. And a somewhat different list would show up by listing countries in terms of land area. Again, though, most of the largest are already given (in rough order, the biggest are Russia -- huge, Canada, US, China, Brazil, Australia -- all very large, India, ... 5 of the 7 are already shown above).
If anyone would like to take on constructing comparable lists -- countries with 1% or more of world GDP, countries with 1% or more of world land area, countries with 1% or more of world ocean EEZ area (Indonesia moves way up!) inside their exclusive economic zones (EEZ) -- please do send it in. I'll get there one of these days, but, as my recent posting rate suggests, this can be a while.
The European Union presents some problems for such list-making. The current EU-27 represents about 500 million people, so would be number 3 on the above list (and take Germany off it), and be a 'country' larger than India in land area. On the other hand it isn't exactly a country. I just finished reading Postwar: A history of Europe from 1945 to the Present by Tony Judt. 'What is Europe' is a more interesting question than I'd thought.
12 November 2008
Back again
Back from China ok, aside from a cold. The trip was good, my first time in an Official Delegation situation. I believe I managed ok, using my chopsticks reasonably and toasting appropriately, if on the lighter end. (Fortunately, my counterparts were also on the lighter end by preference so it all worked out.)
I have some comments in queue and some going back a ways that I've meant to respond to. As my eyes quit watering, etc., I'll be getting back up to speed.
05 November 2008
Off to China
It'll be even quieter here for the next week or so. I'm travelling to China for the World Ocean Week. As I've mentioned, science is international. That includes going places to talk to people. If all goes well, we'll find some interesting areas for collaboration to improve both countries' ocean work.
04 November 2008
Election Day
US readers, please do get out and vote. If you have to endure some rain, well, you're not water soluble.
01 November 2008
Happy 30th to sea ice
A belated happy 30th birthday to our continuous* record of sea ice coverage from satellite! 26 October 1978 is the first data from the SMMR instrument, so last weekend the record finally hit 30 years.
* Ok, not exactly continuous, there's a gap between SMMR and the first SSMI to follow. But it's only a matter of (quite a few) weeks, rather than years as happened between ESMR and SMMR.
30 October 2008
Pielke's poor summary of sea ice
I was amazed to see the following quote from Roger Pielke Sr. in an interview published yesterday in Mother Jones
Roger A. Pielke, Sr.:
In terms of sea ice, if you look at Antarctic sea ice, it actually has been well above average, although in the last couple days it's close to average, but for about a year or longer, it's been well above average, and the Arctic sea ice is not as low as it was last year. So in the global context, the sea ice has been fairly close to average. It doesn't mean it can't happen because we are altering the climate system. But whenever I look at the data, I see a much more complicated picture than what you typically hear about.
It's usually the case that if you look at the data yourself, the picture is more complicated than 'what you typically hear about'. (It also makes some difference where you usually listen.) So that's a noncomment.
It's shocking, however, to hear someone who says he is looking at the data arrive at the conclusion that 'sea ice has been fairly close to average'. A brief visit to Cryosphere Today, a site well worth a long visit and run by a fellow (William Chapman) who is a scientist who studies sea ice (which isn't Pielke's area), quickly takes you to the anomaly graphs for the northern hemisphere and the southern hemisphere. The northern hemisphere is far (about a million square km) below the climatology, and has been below that average continually since early 2003. The trend in this curve became apparent years ago (compare the scatter in the first 21 years to the difference between 0 anomaly and where the northern hemisphere has been the last 5 years). The southern hemisphere trend, which is there and positive (towards more ice), only recently emerged from background noise. Compare the current value (eyeball of about +0.3 million km^2 as I write on the 30th of October) to the scatter (eyeball value of about 0.5 million km^2) for the Antarctic and you see why it's taken so long for a trend to emerge from noise.
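That eyeball comparison of anomaly to scatter is really a signal-to-noise ratio. A minimal sketch, using the eyeballed numbers above and the simplifying assumption that the eyeballed scatter is about one standard deviation of the background variability:

```python
# Rough signal-to-noise check, using the eyeballed figures above.
# Treating the eyeballed scatter as one standard deviation is an
# assumption for illustration, not a statistical analysis.
def signal_to_noise(anomaly_mkm2, scatter_mkm2):
    """Ratio of a sea ice anomaly to the background scatter (both in million km^2)."""
    return anomaly_mkm2 / scatter_mkm2

arctic = signal_to_noise(-1.0, 0.5)     # ~1 million km^2 below normal, scatter assumed similar
antarctic = signal_to_noise(+0.3, 0.5)  # current value vs. ~0.5 million km^2 scatter

print(f"Arctic: {arctic:+.1f} sigma, Antarctic: {antarctic:+.1f} sigma")
```

A couple of standard deviations in one hemisphere versus a fraction of one in the other is exactly why one trend emerged from the noise years ago and the other only recently.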
So on one hand, we have the Arctic ice, which is well (a couple of standard deviations, by eye) below normal, and has been below normal continually for over 5 years. On the other hand, we have the Antarctic, which shows a statistically weak trend and has been bouncing back and forth across normal every year of the record.
However one may describe this for the global net effect, 'fairly close to average' isn't an option.
I'm emailing this to Dr. Pielke once I find an address for him. I'm hoping that he simply was quoted exceedingly badly.
19 October 2008
Discussion: A role for atmospheric CO2 in preindustrial climate forcing
Some climate spinners are no doubt having a field day with the van Hoof et al. paper Steve Bloom pointed us to. It does, after all, say something critical of the IPCC. But if you read the paper itself, you will see that spinners shouldn't be happy; a conclusion I get in reading the paper is that climate is less sensitive to solar and volcanic variations, and CO2 is more variable than previously thought.
Let us take a look at the content. I hope you already have, as I encouraged when Steve first mentioned it. In doing it myself, I'm largely reading it as a nonspecialist. While I have studied more things outside physical oceanography than typical for a physical oceanographer (on the scale of things, by the way, a fairly broadly educated bunch), the biology of plant stomata is not on that list. On the other hand, a good knowledge of how science works gets you pretty far, and you don't need to be a scientist for that.
The central experimental idea is that the density of stomata in plant leaves goes up when there is less CO2 in the atmosphere, and down when there is more. (Stoma being 'mouth' and stomata being a bunch of mouths -- the leaves breathe air in through these mouths.) The Oak (genus Quercus) has been used to infer older CO2 levels before, and these authors do so again. It's more to the novel side that they're trying to infer much shorter term variations than has typically been done before. But even that (their citations 20-24) is not entirely new by now. If you can find Oak leaves (say in swamps) and date when they're from, you then have a way of reconstructing past CO2 levels in the atmosphere that is entirely independent of ice cores. You might also have a method which doesn't have the averaging and delay problems that ice cores have. On the other hand, you have a method which probably has other problems. (All data have problems. That isn't the question; whether the problems affect your conclusions is the question.) The prime novelty in this paper is now to apply the method to the last 1000 years and consider what it may tell us about the climate system.
A quick question is how reliable the method might be. In results and discussion, and figure 1, we get an idea. On the counting stomata side, we're looking at something whose standard deviation ranges up to almost 18 ppmv (parts per million by volume -- the usual unit for CO2 concentrations; the v is often left off), with an average over the whole set of about 6 ppmv. Since the authors are looking at their largest signal being 34 ppmv, the 18 ppmv standard deviation is not small, though the 6 ppmv should be good enough. So we'll set a reminder to ourselves to see if the conclusion depends sensitively on this '34'. Turns out that a conclusion relies on 34 being different from 12; so the 6 ppmv standard deviation is definitely good enough.
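The reminder to ourselves amounts to asking whether 34 ppmv and 12 ppmv are distinguishable given a ~6 ppmv standard deviation. A back-of-envelope check, assuming the two signals carry independent errors that add in quadrature (my assumption, not something the paper states):

```python
import math

# Back-of-envelope: is the 34 ppmv stomatal signal distinguishable
# from the 12 ppmv ice core signal, given a ~6 ppmv standard deviation?
# Independent errors adding in quadrature is an assumption for
# illustration, not the paper's own analysis.
signal_stomata = 34.0   # ppmv, largest variation in the leaf reconstruction
signal_icecore = 12.0   # ppmv, largest variation in the ice cores
sigma = 6.0             # ppmv, average standard deviation of the method

difference = signal_stomata - signal_icecore
combined_sigma = math.sqrt(2) * sigma  # both signals uncertain at ~6 ppmv
print(f"{difference:.0f} ppmv difference is {difference / combined_sigma:.1f} sigma")
```

At roughly two and a half standard deviations, the 6 ppmv average uncertainty is indeed good enough for the 34-versus-12 conclusion, as noted above, though the 18 ppmv worst cases would not be.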
In figure 1b, we're shown the regression scatter plot between stomatal index and CO2 values. Though it isn't mentioned this way, eyeball suggests that the CO2 levels inferred have less scatter at low CO2 levels (higher stomatal index levels). My being a nonprofessional, though, means that there might be some obvious, to a professional, bit of biology which says that I'm over-reading the graph. Taking the figure as given shows why there's typically a substantial standard deviation in the inferred CO2 levels -- the stomatal index doesn't have a tight correspondence to CO2.
So we have a bit of a concern about how faithful a record the oak leaves give. The authors address this by looking at how the oak leaf record they construct compares to the ice core record from Antarctica, after processing it through a filter which has a similar averaging and delay behavior. The result (figure 1d, red vs. blue curves) shows pretty good agreement. Though there are some wide error bars (the gray band), the two curves from wildly different sources agree pretty well on time scales of a few decades and up. In saying 'a few decades and up', what I mean is that not every single bump on the curves lines up exactly. But if you look at averages over 30-50 years, they do compare pretty closely. Obviously an area for research is to attack exactly why there are any differences at all.
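The essential idea of such a filter is smoothing: average the fast-varying record down to the time resolution of the slow recorder before comparing. A minimal sketch with a simple boxcar average (the paper's actual firn-smoothing filter is more elaborate, and these numbers are made up for illustration):

```python
def moving_average(series, window):
    """Boxcar smoothing: average each run of `window` consecutive points.
    A stand-in for the more elaborate averaging-and-delay filter the
    paper applies before comparing leaf and ice core records."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

# A fast-varying CO2-like record (ppmv, invented values) smoothed to
# 5-point resolution loses its short bumps, much as ice core gas
# records average away decadal wiggles.
record = [280, 283, 290, 295, 288, 284, 281, 286, 292, 289]
print(moving_average(record, 5))
```

Once both records are at comparable resolution, any remaining disagreement is a real discrepancy between the recorders rather than an artifact of their differing time averaging.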
Now let's turn to the significance of the work if we take the reconstruction as given. As nonprofessionals, we can't go much farther now with whether we should do so, but we've got some pointers to ourselves about what to look at further if we were of a mind to pursue it. The ice cores, partly due to their time-averaging, only show a variation in the period discussed (1000-1500) of at most 12 ppmv. That, versus the 34 ppmv difference the authors find from their different recorder -- one we might expect to have much finer time resolution. If free variations of CO2 are much larger than previously thought, then more of the climate would be CO2-driven than previously thought. This continues an idea (as far as I know) William Ruddiman started (see citation 13). Tripling the CO2 contribution in the pre-industrial period also means that prior estimates of climate sensitivity to volcanoes and solar variations would have been overestimates.
This is why folks who are going to leap on 'IPCC was wrong' parts of the paper really shouldn't be happy. The conclusion is that climate is less sensitive to solar and volcanic, and that the natural carbon cycle is more prone to variation. Independently of any of this, however, we know that the recent 100 ppm rise was due to human activity. (See Jan Schloerer's CO2 rise FAQ if you wonder about this.) Whatever caused the 34 ppmv variation observed in this study hasn't yet been going on.
Now let's see if we know anything outside the paper that is relevant. The parts inside look reasonable. The prime thing which struck me is the 34 ppmv variation, occurring in only 120 or so years (1200 to 1320 or so by eye). That's a lot of CO2 to be released in a short period by natural means. The authors mention that it's contemporary with a warming of the North Atlantic, which is the right sign of change -- warmer water holds less CO2. But they don't present a quantitative argument about the water being enough warmer over a large enough part of the world to have released that much CO2. Time and space are limited in a PNAS paper so this isn't the issue it would be in a different source. But it's something to pursue. Prompted by this, though, I arrive at a different biological question. Or geological; or both. That is, the places where Oak leaves get buried in such a way that you can dig them up 1000 years later to analyze stomata have to be pretty special. Could such a special locale exert a local effect, say that when it's warmer stuff decomposes faster -- releasing more CO2 but only mattering to local trees? If so, then some portion of the 34 vs. 12 ppmv signal is a local effect, and the discrepancy between leaves and ice cores is reduced. Then again, this may not be a factor. Not knowing the biology leaves me in that position of needing to find more informed sources.
To that end, I'm sending this blog note and address to the corresponding author and inviting him to respond either on the blog or by email. Hank Roberts (you've seen him here a few times) brought up the idea elsewhere, that it might be a good thing for people to let authors know when their science is being discussed in a blog. Thanks for the idea Hank.
You should also see a new icon next to this note. It's from Research Blogging, and the idea is to tag blog notes about scientific papers. Then readers who'd like to see research-oriented blogging can go to the main site and have a summary feed of such postings.
17 October 2008
Science is collaborative
I wouldn't have thought it so, but apparently it is a surprise to some (many, actually I've seen quite a few such comments before) that science is a collaborative activity.
Quoting the scientist:
I am speaking for myself… Thanks to Stephanie Renfrow, Ted Scambos, Mark Serreze, and Oliver Frauenfeld of NSIDC for their input.
The blog commentator responds:
The result of this “groupspeak” is unconvincing to this reader. It would have been nice to hear the real thoughts of one real man.
Real scientists know that they are not omniscient. Even within your professional area, you know that you don't know everything. So, if you're contacted by some group with 'a few questions' and have a chance to do so, you run the questions and your answers past some other knowledgeable people. This is lower level stuff than hard core 'peer review', but some basic check that you weren't too focused on your own sub-sub-sub-niche at the expense of other relevant parts of the situation, and that you didn't have a thinko/typo in your answer.
There's nothing terribly special about scientists in this. Science or not, most people know they're not omniscient. I'm pretty sure that the Cubs' first time in the postseason since 1945 was 1984, and that the next time after that was 1989. But if I were answering in a situation (as the above scientist was) loaded with people who think I'm a liar before I answer, that people in my profession are participating in some grand conspiracy, or that international decisions would depend on my answer, I'd do some checking with references and other Cubs fans about whether it was 1984 and 1989 or some other years. Those are probably the right years, and in a casual chat between you and me, I'd go with them. But if it mattered, it would be time for research and checking with other knowledgeable people.
Yet, come to climate -- a big hairy mess of a system that no one person can hope to understand all of in detail -- and responses like the above are common. Somehow an individual scientist is supposed to become omniscient and not rely on checking out his answers with others. Yet any honest person as a matter of routine does so even in far less public and far less socially important situations.
16 October 2008
Computers in climate
Earlier I talked a bit about the math and science classes you would probably take if you went to study climate. Somehow, perhaps because they're so ubiquitous, I forgot to mention anything about the computer end of things. On the other hand, they are ubiquitous, so I should catch up a bit and mention the computer hardware, operating systems, software packages, and programming languages you might run into, or that I have, in working on climate modelling and data analysis. One caveat before starting: the computers are just tools. Being good with computers (knowing many languages, whatever) is like being good with a hammer. Being good with hammers doesn't make you a carpenter, and being good with computers doesn't make you a scientist.
The hardware is pretty much anything you can find. For small models or data sets, a single processor desktop is still used. Small being defined as 'what you can do on a single processor desktop system'. Given that they're about a million times more powerful than the desktops of 30 years ago, this is actually not a minor set. In intermediate ranges are multiprocessor desktops or workstations (or at least what we used to call workstations; the distinction seems far less common now), up to a few dozen home-style processors. I see that you can now get at home, if not cheaply, 8 processor systems with 8 GB of memory. The first computer I worked on had 8 KB of memory. These mid-range systems can do substantial work, particularly if used well. At the high end, you're looking at hundreds to thousands of processors, or vector processors. The latter were the domain of Cray from the mid 70s to the 90s; NEC started producing them as well. They had (from the later 80s) multiple processors, but the number was fairly small. The power of such systems was that each processor could do the same thing many times over (16-32 in the early 90s). Since our models tend to do just that, this can be an effective design.
Operating systems have often been a matter of whatever the vendor shipped. In the 70s and 80s, this was often home-grown at the vendor's. These days for larger systems it's almost always some flavor of Unix or related. For smaller systems, it's hardware-based (Mac, which these days is also Unix-based), Unix-related (Linux, BSD), Windows, or other, what seem to be more regional systems (Acorn?).
Programming languages are often a matter that people get ... let's say 'testy' ... about. I don't really understand it myself. For the models themselves, the main language is Fortran in whatever flavor is widespread. These days 90/95. But others get used as well or instead, including C, C++, Java, and Python. For the data processing, it is usually one of these others which is most-used (mostly C, with the others increasing; at least from where I sit). The reason I don't see the problem is that I am polylingual myself (more below) and learning a new programming language just isn't a big deal and doesn't seem like it ought to be. The one significant hurdle is going between procedural languages like Fortran/C/... and object oriented languages like C++/Java/... But if you're staying on one side of that hurdle, going from one language to another is a fairly minor matter if you learned to be rigorous in the first place.
You'll also likely wind up using one or another graphics or toolbox sort of package. Some are: Matlab, IDL, R, GraDS, MacSYMA, MAPL. I'm sure there are a raft more; these are just ones I've heard of recently. As they're (mostly) commercial packages, which one gets used varies widely by what center you're at.
So a fairly typical set to know is something like:
- Fortran and at least one other language (C/C++/Java/Python/...)
- A Unix-related operating system plus a home system (Mac, Windows, ...)
- A graphics package
My own hardware list (as I recall it over the decades):
Home-type computers:
Wang 8k, Apple II, Commodore 64, Mac Plus (wrote my PhD on one), Mac (IIx, IIfx, SE30, PowerMac 120, PowerMac G5, Mac Pro), IBM-PC (when it really was IBM), PC-AT, 386-type, 486-type, Pentium I, II, III, IV.
Workstations:
DEC PDP-8, PDP-11, VAX 11/750, VAX 11/780
HP (... never seemed to name theirs, but a 68030/68881, running HP-UX 5, and then, later, an HP-UX 10 system)
SGI Iris, Indigo, Origin,
Sun Sparc 1, 10, and a couple of Solaris systems
Big systems:
CDC 180, 195
IBM (old big-iron systems)
Cray 1, 2, X-MP, Y-MP, C-90, J-916
IBM RS/6000 (PowerPC based parallel systems PowerPC 2-6 if I remember correctly)
Operating systems:
NUCC (Northwestern University CDC system, SNOBOL-based)
NOS-Ve (CDC system of early 1980s)
COS (Cray operating system)
VAX/VMS
IBM systems: MVS, VM/CMS, ... (?)
*nix flavors: HP-UX, PDP Unix, Solaris, Linux (Slackware, Red Hat), UNICOS (Cray Unix), AIX (IBM Unix), ... no doubt several more
CP/M (not really an operating system, but, for lack of a better word ...)
DOS 1.0, 3.0, 5.0, 6.0; Windows 3.1, 95, XP; Desqview X
MacOS 1-9, X
Programming Languages:
Fortran (4, 66, 5, 77, 90/95; Ratfor, Watfor, Watfiv)
C
C++
Pascal, Basic, Java
Logo, Algol, APL
Forth, Lisp
Not languages, but:
VAX/VMS assembler, 68030 assembler
And, again I wouldn't call them languages, but I use them: Perl, Javascript
... and yes, I did use punched cards. Wrestled a pterodactyl so I could use its beak to punch out the holes!
13 October 2008
Ad hominem
One of the more heavily abused terms around blogs and such is ad hominem, which is unfortunate because it is actually a useful method for weeding sources -- if you understand what it really means. The bare translation of the Latin, against the man, doesn't help us much unfortunately. But it's a start. You know something is up when someone starts talking about a person instead of the science in what is supposed to be a scientific discussion.
The classic form of an ad hominem is:
"X is a bad person, therefore they're wrong about Y".
'bad person' is usually substituted with something else ('travels', 'has a big house', ...), and the 'therefore' is often omitted. One I commonly see is, for example, "AGW is a scam. Al Gore has a big house and uses a lot of electricity."
The logical fallacy being that comments about Al Gore say exactly nothing about the science on anthropogenic global warming. But if someone says it loud enough, often enough, and maybe shouts it at you, you might get swept up into their emotional argument. They're hoping so.
My wife occasionally reminds me that there's more to the world than science. So I'll step outside that and look at the policy implications the ad hominem people want us to draw. One is obviously that they want us to think that doing anything about AGW requires us to give up having large houses and using 'a lot' of electricity. Except this is false. See my comments about keeping your vehicle how you choose for more. Part of how Gore is dealing with the carbon produced in his energy use is to buy carbon credits (the idea being that if you cause 10 units of carbon to be released, and you also cause 10 units to be drawn back out of the atmosphere, then your net effect is zero -- and you can do what you like as long as your net is zero). If that's brought up, the ad hominem folks then say something about Gore owning (part of? I don't know) the company he buys the carbon credits from. As long as the carbon is buried, though, it doesn't matter -- to the science, or honest policy -- who gets the corporate profit. Their complaint, then, is that Gore is able to make money while living large and not contributing to climate change. Huh? Shouldn't he be getting a medal for that? There are jobs to be had in carbon credit sorts of activities -- planting trees, building windmills, whatever. But that means a common whine from them -- that doing anything about climate change would bankrupt the world (talk about folks who want to scare us!) -- would also be false.
If you've got examples of people whose opinions are quite different from Gore's on climate and are regularly attacked in similarly ad hominem ways, do submit names and examples. It was a Gore ad hominem article I just saw that prompted this note. He, Jim Hansen, and Michael Mann, in that order, are the three I see the most ad hominem attacks against. No doubt a function of where I read.
Back to the science. When you see a note that talks about the person who holds an idea, rather than the idea, you can be fairly confident that the source isn't concerned about the idea. If there is no such thing as a greenhouse effect, present evidence against that; don't tell me about Al Gore's house. And so on. A Google search on "al gore" agw scam brings up the following examples of folks playing the ad hominem game against Gore instead of saying anything substantive about the science, so they can be added to the list of unreliable sources:
http://www.newsbusters.org/blogs/noel-sheppard/2008/03/04/weather-channel-founder-sue-al-gore-expose-global-warming-fraud
(This site showed up earlier as an unreliable source -- as I mentioned early on, there's a lot of consistency in being unreliable. Once you've found one such example, you're likely to find multiple if you spend more time. That's the value of discovering at some point that a site is unreliable.)
http://www.kusi.com/weather/colemanscorner/19842304.html
John Coleman is the person quoted above.
http://www.larouchepac.com/news/2008/01/22/faked-temperature-record-behind-al-gores-genocidal-fraud.html
And you'll see quite a lot of examples in blog comments, but I excluded sites where such things only appeared in the comments.
There's a converse mistake about ad hominem that's often made (usually by the same people) -- that to say anything about a person is ad hominem. Of course they don't apply that rule consistently. For instance, it is certainly important to consider the qualifications (a personal attribute) of a speaker. So a comment that Al Gore is a (retired) politician, rather than a climate scientist, is perfectly reasonable. I don't get my science from him for that reason. If you do, then please read some science books as well, say Spencer Weart's The Discovery of Global Warming. Equally, though, it's not ad hominem to observe that John Coleman is a TV weather forecaster, not a climate scientist. So it probably isn't a good idea to rely on him for your climate science either.
12 October 2008
Question Place 3
Have questions about science, especially oceanography, meteorology, glaciology, or climate? Here's a place to put them. For that matter, questions about running as well. Maybe I'll answer them here, and maybe (as happened with Dave's note back in May, or Bart's in September) they'll prompt a post or three in their own right.
11 October 2008
We're all related
On Greg Laden's Blog, I've been joining the discussion a bit about race and its meaninglessness as a biological thing for humans. If you want to take that up, please join that thread. Here, I'm taking up the more genealogical side of relationship.
I've mentioned before that I've been doing some genealogy. This has included running mitochondrial and Y chromosome DNA samples. (Well, having someone else do so :-) The mitochondria pass solely from mother to child, so they give a pointer to where my strictly matrilineal line comes from. The Y chromosome passes strictly from father to son, so it points to where the strictly patrilineal side comes from. In both cases 'points' is a generous description for something that really means 'within a few thousand miles, give or take a lot'. For the strictly patrilineal side, I already 'know' the location outside the US (colonies as it turned out) -- Leonhart Krumbein of Strasbourg, who came in 1754 on the ship Brothers.
For the strictly matrilineal side, I had some hope of an interesting result. That side runs back to the early history of Maryland, and then is lost. Could have been that one of the women was a Native American. I'd have liked that. Unfortunately, both sides point to the most common haplogroups in Europe (H for the maternal side, R for the paternal). So no great surprises or novelty there. Oh well. It also established (not that this was a question) that I'm a mutant. (So are you, don't worry.) My mitochondria differ in 7 places (out of about 1500 checked) from the reference sequence. A little looking around shows one of the mutations to be fairly uncommon, even though the rest are very common.
By the time we're looking at 10 generations back (which is where I've tracked, loosely, the matrilineal side to), we've got 1024 ancestors. These tests only speak to 2 of those 1024. Quite a lot of variety could exist in the remaining 1022, but it won't show up in these two parts of our genome. At 20 generations back (about 1350), we've got about a million ancestors. That starts to get interesting, as it gets to be comparable to the population of significant areas (all of England at the time, for instance). Go to 30 generations (about 1050) and there are a billion ancestors -- greater than the population of the world at the time. Make it 40 generations, about 750, back to grandpa Chuck, and we've got a trillion ancestors -- more than 2000 times the world population of the time.
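The doubling arithmetic above is easy to sketch. A minimal example, using the naive model of two distinct parents per person -- exactly the assumption that pedigree collapse eventually breaks:

```python
# Naive ancestor count: 2**n slots n generations back.
# These are slots in the tree, not distinct people; pedigree
# collapse means the same person can fill many slots.
def ancestor_slots(generations):
    return 2 ** generations

for g in (10, 20, 30, 40):
    print(g, ancestor_slots(g))
# 10 -> 1,024; 20 -> ~1.05 million; 30 -> ~1.07 billion; 40 -> ~1.1 trillion
```

Comparing 2**30 (about a billion) against a medieval world population of a few hundred million is the whole argument: there aren't enough distinct people to go around, so the same ancestors must appear many times.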
Grandpa Chuck is Charlemagne, who shows up in my tree. Given world population at the time, he probably shows up at least 2000 times in my tree if the whole thing were to be discovered. Back in college a friend (hi Derek) mentioned his descent from Charlemagne. This was before I knew about pedigree collapse so I was skeptical. Now that I do, it's more the converse -- it'd take evidence to show that you (for any of you, anywhere in the world) do not share him as an ancestor. Pedigree collapse is this business that as you go back in time, world population drops, but your number of ancestors keeps increasing. You hit a point where the same person must show up multiple times in your ancestry.
Related is that with a trillion slots to fill in your ancestry, even some connections that may seem intuitively unlikely are certain to happen. Intuition isn't a very good guide when numbers get large (among other times).
So I already know my real list of relatives -- everybody, everywhere. The only question would be how closely related we are. Even before starting the genealogy, I'd realized this. Even after greatly extending my knowledge of who came from where, 60% of the 'where' is still unknown. I take the rest of the world in that 60%.
07 October 2008
Radiative Heating
Short example of radiative heating from today's lunch. We were at a Japanese restaurant and they did the usual bit of cleaning the surface and then spraying it down with something flammable and igniting it. We noticed that we felt the heat even from the burners 20-30 feet away as the big whoosh of flame went up.
I've mentioned before that there are three methods of moving heat around -- conduction (molecules bouncing into each other; very slow), convection (carrying the hot air from one place to another), and radiation. Now hot air rises, and we were not sitting above the burners! So what is left is radiation. The short-lived, but hot, sheet of flame radiated some heat over to us 20-30 feet away.
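For a rough sense of the numbers, here's a back-of-envelope sketch. The flame temperature, flame area, and distance below are all made-up illustrative assumptions; only the Stefan-Boltzmann law and the inverse-square spreading are standard physics:

```python
import math

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(temp_k, area_m2, emissivity=1.0):
    """Total power radiated by a surface, via the Stefan-Boltzmann law."""
    return emissivity * SIGMA * area_m2 * temp_k ** 4

def irradiance_at(power_w, distance_m):
    """Flux at a distance, treating the source as a point radiating isotropically."""
    return power_w / (4.0 * math.pi * distance_m ** 2)

# Made-up numbers: a ~1300 K flame sheet of ~0.5 m^2, diners ~8 m (25 ft) away.
power = radiated_power(1300.0, 0.5)  # roughly 80 kW while the flame lasts
flux = irradiance_at(power, 8.0)     # roughly 100 W/m^2 at the table
print(round(power), round(flux, 1))
```

About 100 W/m^2 is a tenth of full sunlight, which squares with feeling a brief but noticeable warmth from across the room.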
Some fireplaces take advantage of the principle by using a 'firebrick', which absorbs heat and then efficiently radiates it back into the room, rather than letting it go with the air up the chimney.
Anyone else have a daily experience with (non-solar) heat transfer by radiation?
29 September 2008
24 Hour Contests
My wife, Vickie, and I took part in 24 hour contests last weekend. Hers was the more formal -- a 24 hour short-story writing contest. They give a topic and the word length at noon one day, and the story is due by noon the next day. Neither the topic nor the length really suited her; the limit was only 850 words, which is quite short to write a good story in. But I believe she succeeded. And she discovered some more about her writing, and what she can do. So all to the good. We'll find out in a month or so what the judges thought.
My own contest was a bit more unusual, not least because there were no other contestants, no judge, and no rules. Still, it occurred to us that while Vickie was doing her 24 hours focused on writing, I could also do 24 hours focused on writing. But in my case, a science paper. Continuing with a paper I've already started and trying to finish it in 24 hours would not have been in keeping with the spirit of her contest, which was focused on novelty. I have a ton of ideas, though, lurking in the back of my mind at any given time, and several feet of them in cabinets if, for some reason, I don't like ones that are leaping to mind. So my challenge was to take one of them and give it a good hard run for 24 hours.
I didn't finish a paper, though I did get 2 good pages written. The writing was done in the first hour. (I'm a fast typist and have been thinking about this idea off and on for a few years.) Then on to the charge at the data. Or, rather, the slow and careful sneaking up on the data, hoping that it didn't bare its fangs and shred my idea in the first few seconds of contact.
After my 24 hours, the notion was still intact and, if anything, looking better. Didn't finish the paper, but no surprise there as actually there are quite a few papers to come from this idea. But I did make good progress on getting data and testing that the idea held up against some reasonably good counter-tests. More detail to come later, once I get a little farther. But things to be coming up here before then are the North Atlantic Oscillation, Arctic Oscillation, Pacific-North American, and Antarctic Oscillation (NAO, AO, PNA, and AAO, respectively).
24 September 2008
Atmospheric Lapse Rates
The question place is serving its purposes, one of which is to bring up points that warrant some discussion in a fuller post. At hand is the atmospheric lapse rate, which Bart brought up by way of his question:
2) I read elsewhere (I can only research what I read, I don't really have the ability to check much of this for myself) that models assume a constant lapse rate. Chris said the lapse rate is required for the greenhouse effect, but from everything I look at people only categorize in "Dry" or "Moist" cases, but doesn't it vary everywhere over the globe?
There are models, somewhere, that assume anything one could mention, so I suppose there are some which assume the lapse rate. As you correctly notice, though, lapse rates depend on conditions, and those conditions vary over the globe. A serious climate model couldn't assume the lapse rate. And, in truth, they don't. More in a moment, but something to look back at is my description of the 16 climate models.
Let's start with the lapse rate itself. It is the change in temperature with elevation. Through the troposphere, the lapse rate is a negative number (cooling with elevation). In the stratosphere, it turns to zero and then positive (warming with elevation). In the mesosphere, we go back to cooling with elevation. This is a strictly observational issue. You can find temperature profiles, say from the Standard Atmosphere (a specific thing; Project: take a web look for it and see what they look like; they've changed through time, by the way). Then find the temperature difference between two levels, and divide by the elevation difference. That'll give you the average lapse rate. You can also find radiosonde soundings of temperature. (I'd start my search for this project at the National Climatic Data Center.) These will let you see how the lapse rates vary day to day at a location, and between locations.
On the theoretical side, we go back to Conservation of Energy. We start with a completely dry (meaning no water vapor) blob of air, in an insulating bag that prevents it from radiating, conducting, or convecting energy to or from the surroundings. Then we lift it through the atmosphere. As we do so we'll find that its temperature drops. This happens because our blob does work in expanding. The energy for that work comes from its own thermal energy store. We can compute exactly how much the air would cool under this circumstance. It is about 10 K per km near the surface of the earth. This is what we are referring to in talking about the Dry Adiabatic Lapse Rate. The 'adiabatic' refers to our insulating bag around the air blob.
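That "about 10 K per km" falls straight out of the energy conservation argument: the first law plus hydrostatic balance gives the dry adiabatic lapse rate as gravity divided by the heat capacity of dry air. A minimal sketch, using standard round values for the constants:

```python
# Dry adiabatic lapse rate: Gamma_d = g / c_p,
# from the first law of thermodynamics plus hydrostatic balance.
g = 9.81      # gravitational acceleration, m/s^2
c_p = 1004.0  # specific heat of dry air at constant pressure, J/(kg K)

gamma_dry = g / c_p * 1000.0  # convert K/m to K/km
print(f"dry adiabatic lapse rate: {gamma_dry:.2f} K/km")  # about 9.8 K/km
```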
The polar regions, particularly the Antarctic plateau, are not bad approximations to that situation. But most of the atmosphere has fairly significant amounts of water vapor. We start, now, with a slightly different bag. It still prevents heat from being added to or lost from the bag from outside. But now there's a second energy source inside the bag. Water vapor can condense, and when it does, it will release energy. We take the approximation that all the heat energy goes to the gases in the bag, and that the newly-formed liquid water is immediately moved outside the bag.
Now when we lift the bag, things go a bit differently. Let's start with air at 70% relative humidity, a typical global mean value. As we lift the air, it first acts 'dry', so cools at about the 10 K per km rate. But after a while, we have cooled to the point of being at 100% relative humidity. As we lift any further, water starts condensing and releasing heat. The condensation only happens if we're still cooling, so it can't reverse that tendency. But it can greatly slow the rate of cooling. This gives us a Moist Lapse Rate. Note that I dropped 'adiabatic' from the description. Since material is leaving the bag, it isn't an adiabatic process any more. It is pseudoadiabatic (a term you'll see) -- almost adiabatic, as the loss of mass isn't large. But not entirely adiabatic.
As a typical ballpark value, we take 6.5 K per km as the moist lapse rate. But this obviously will depend a lot on how much water was in the bag to begin with, and the temperature. If we start with a very warm, saturated, bag of air, then the lapse rate can be even lower than the 6.5 K per km. If we start, though, with a cold blob of air, even if it is saturated, we are still close to the 10 K per km lapse rate. The thing is, as we get colder, there's less water vapor present, which gives less condensation, and then less heating. Consequently even in the tropics, the lapse rate heads towards the dry adiabatic value as you get high above the surface.
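The temperature dependence is easy to see numerically. A sketch using the standard saturated-adiabatic lapse rate formula (the form you'll find in Wallace and Hobbs) with Tetens' approximation for saturation vapor pressure; the constants are common round values, and the function names are mine:

```python
import math

def saturation_vapor_pressure(T):
    """Saturation vapor pressure in Pa (Tetens' approximation); T in K."""
    Tc = T - 273.15
    return 611.2 * math.exp(17.67 * Tc / (Tc + 243.5))

def moist_lapse_rate(T, p=100000.0):
    """Saturated (pseudo)adiabatic lapse rate in K/km at temperature T (K),
    pressure p (Pa)."""
    g, c_p = 9.81, 1004.0        # m/s^2; J/(kg K)
    L = 2.5e6                    # latent heat of vaporization, J/kg
    R_d, eps = 287.0, 0.622      # dry-air gas constant; vapor/dry mass ratio
    e_s = saturation_vapor_pressure(T)
    r_s = eps * e_s / (p - e_s)  # saturation mixing ratio, kg/kg
    num = 1.0 + L * r_s / (R_d * T)
    den = c_p + (L ** 2) * r_s * eps / (R_d * T ** 2)
    return g * num / den * 1000.0  # K/km

print(moist_lapse_rate(303.15))  # warm, saturated: well below 6.5 K/km
print(moist_lapse_rate(233.15))  # cold: nearly the dry 9.8 K/km
```

The warm, saturated bag cools at only a few K per km, while the very cold bag, saturated or not, is back near the dry value -- exactly the behavior described above.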
Whether moist or dry, the lapse rate computed this way is an idealization. In the real atmosphere, radiation does move energy around, and blobs of air do mix with each other (even when rising). Still, it's derived from a strong scientific principle (conservation of energy), and it turns out to give us good ideas (in reasonable accord with observation) about what the atmosphere should look like in the vertical.
For the modelling, let's think back to the 16 models. First, many of them are never used, so we'll ignore the models that primarily involve longitude. That leaves us with the 0 dimensional model I've already given an example of, and there's not even the opportunity to impose or make use of a lapse rate in that. The 4 dimensional model definitely doesn't assume a lapse rate -- doing so would force violations of conservation of energy. Radiative-convective models can't force the lapse rate for the same reason. For a discussion of such models, to which I'll be returning in another post about water vapor's greenhouse contribution, see Ramanathan and Coakley, 1978. As of that era, one did specify a critical lapse rate. This isn't the lapse rate that the model had to have; rather, it was a limit. If the limit is violated, something has to happen. That something is to conserve energy by mixing the layers that violated the limit. And Energy Balance Models, as I expected, don't even mention lapse rate. See North, 1975 for a discussion of energy balance models.
Either the models are too simple to know about lapse rates (0 dimensional, Energy Balance), or they compute the lapse rate (Radiative Convective, 4 dimensional). Either way, the lapse rate is not assumed beforehand. It's an interesting after-the-fact diagnostic for the Radiative Convective or 4d models, or impossible to speak to.
One thing to do is find some better sources for you to read. I taught an introductory (freshman level) physical geology class with Lutgens and Tarbuck, and liked the text there. They have a text at that level for meteorology, but I haven't read it myself. It should be good, though. John M. Wallace and Peter V. Hobbs, Atmospheric Science: An Introductory Survey, is an excellent book. In half the chapters, comfort with multivariate calculus is assumed. But the other half are descriptive/physical rather than quantitative/mathematical, so should be approachable already. A second edition is now out; I used the first. Does anyone have suggestions for a good freshman level introduction to meteorology/climate?
21 September 2008
Excess Precision
Excessive precision is one of the first methods mentioned in How to Lie With Statistics. It's one that my wife (a nonscientist) had discovered herself. It's very common, which makes it a handy warning signal when reading suspect sources.
In joke form, it goes like this:
Psychology students were training rats to run mazes. In the final report, they noted "33.3333% of the rats learned to run the maze. 33.3333% of the rats failed to learn. And the third rat escaped."
If you didn't at least wince, here's why you should have. In reporting scientific numbers, one of the things you need to do is represent how good the numbers are. In order to talk about 33.3333% of the rats, you'd have to have a population of a million rats or more. 33.3333% is saying that the figure is not 33.3334% or 33.3332%. You should only show as much precision as you have data for. Even though your calculator will happily give you 6-12 digits, you should be representing how accurate your number is. In the case of the rat problem, if 1 more rat had been run, one of those 33% figures would change to 25 or 50. Changes of +17% or -8% are so large that the figures should not even have been reported at the 1% level of precision. What the students should have done was just list the numbers, rather than percentages, of rats all along.
As a reader, a useful test is to look at how large the population is versus how many digits they report in percentages. Every digit in the percentage requires 10 times as large a population. You need 10 for the first digit (again, the psych. students shouldn't have reported percents), 100 for the second, and so on. A related question is 'how much would the percentages change with one more success/failure?' This is what I looked at with running the extra rat.
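The one-more-observation test is easy to mechanize. A small sketch (the three-rat figures come from the joke above; the helper name is mine):

```python
def reported_percent(successes, total, digits):
    """Percentage rounded to the given number of decimal digits."""
    return round(100.0 * successes / total, digits)

# Three rats, one success: the infamous 33.3333%.
print(reported_percent(1, 3, 4))  # 33.3333

# One more rat swings the figure wildly, so the extra digits were never real.
print(reported_percent(1, 4, 4))  # 25.0 if the extra rat fails
print(reported_percent(2, 4, 4))  # 50.0 if it succeeds
```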
Related is to consider how precise the numbers involved were at the start. When I looked at that bogus petition, for instance, I reported 0.3 and 0.8%. Now, the number of signers was given in 4 or 5 digits. That would permit quite a few more digits than the 1 I reported. The reason for only 1 is that I was dividing the number of signers by the size of the populations (2,000,000 and 800,000) -- and the population numbers looked like they'd been rounded heavily, down to only 1 digit of precision. When working with numbers of different precisions, the final answer can only have as many digits of precision as the worst number in the entire chain.
An example, and maybe the single most commonly repeated one from climate, is this page, which gives (variously, but table 3 is the pièce de résistance) the fraction of the greenhouse effect due to water vapor as 95.000%. That's a lot of digits!
Let's take a look at the sources he gives, and then think a little about the situation to see whether 5 digits of precision is reasonable. Well, the sources he has valid links for (1 of the 9 is broken, and one source doesn't have a link; I'll follow that up at lunch at work in a bit) certainly don't show much precision. Or much of being scientific, for that matter (news opinion pieces and the like). My favorite is 21st Century Science and Technology (a LaRouche publication), whose cover articles include "LaRouche on the Pagan Worship of Newton". The figures given are 96-99% (LaRouche mag), 'over 90%', 'about 95%', and the like. Not a single one gives a high precision 95.000%, or a high precision for any other figure. This should have been a red flag to the author, and certainly is to us readers. Whatever can be said about the fraction of greenhouse effect due to water vapor, it obviously can't be said with much precision. Not if you're being honest about it. (We'll come back in a later post to what can be said about water vapor, and it turns out that even the lowest of the figures is too high if you look at the science.)
Now for a bit of thinking on water vapor. The colder the atmosphere is, the less water vapor there can be before it starts to condense. (It's wrong to call it the atmosphere 'holding' the water vapor, but more in another post.) It also turns out to vary quite a lot depending on temperature. In wintertime here (0 C, 32 F being a typical temperature), the pressure of water vapor varies from about, say, 2 to 6 mb. In summer, it's more like 10 to 30 mb. (30 mb?! It gets very soggy here, though not as much as Tampa.) On a day that it's 30 mb here, it can be 10 mb a couple hundred km/miles to the west. Water vapor varies strongly through both time and space. As a plausibility test, then, it makes no sense for there to be 5 digits of precision to the contribution of something that varies by over a factor of 10 in the course of a year, and even more than that from place to place on the planet.
18 September 2008
1970s Mythology
One of the more popular myths repeated by those who don't want to deal with the science on climate is that 'in the 70s they were calling for an imminent ice age' and suchlike nonsense, where 'they' is supposedly the scientists in climate. This has long been known to be false to anyone who paid attention to the scientific publications from the time, or who followed William Connolley's efforts over the last several years to document what was actually in the literature. Now, William and two other authors (he's actually the second author on the paper) have put that documentation into the high-profile peer-reviewed literature -- the Bulletin of the American Meteorological Society. For the briefer version, see William's comments over at Stoat and web links therein. That page also includes a link to the full paper in .pdf format.
16 September 2008
Sea Ice Packs
I've already mentioned types of sea ice, but that's only a bare scratch on the surface of the subject of sea ice. Another bit of vocabulary before diving in to today's sea ice: a chunk of sea ice is called a 'floe'. Not a flow, nor a sheet, a floe. Ice sheet is something quite different.
When we get a bunch of floes together, we start to have an ice pack. Three terms come up for describing a region of the ice pack (or maybe the entirety): concentration, area, and extent. Ice pack area makes the most intuitive sense -- add up the area of all ice floes, and that's the area of the ice pack. Concentration and extent are a little more removed. For concentration, draw a curve around some region you're interested in. Then divide the area of sea ice by the total area of the region bounded by your curve. Two common 'curves' used in the science are the footprint of a satellite sensor, and the area of a grid cell. The latter is what you'll see presented on any of the graphics at the sea ice sites I link to. For extent, you then take your grid and for every cell that has more than some concentration (which you'll specify), you add up the area of the entire grid cell. Extent will always be greater than area.
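Those three definitions can be written down in a few lines. A minimal sketch on a toy grid (the cell areas and concentrations are made-up illustrations, and the function names are mine; the 15% cutoff is discussed below):

```python
# Each grid cell: (cell_area_km2, ice_concentration_fraction).
grid = [(625.0, 0.90), (625.0, 0.40), (625.0, 0.10), (625.0, 0.00)]

def ice_area(grid):
    """Sum of ice-covered area: concentration times cell area."""
    return sum(a * c for a, c in grid)

def ice_extent(grid, cutoff=0.15):
    """Sum of the full area of every cell at or above the cutoff concentration."""
    return sum(a for a, c in grid if c >= cutoff)

print(ice_area(grid))    # 875.0 km^2
print(ice_extent(grid))  # 1250.0 km^2 -- extent is always >= area
```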
The usual concentration cutoff, and the one to assume if it isn't specified, is 15%. Below this, the ice is not reliably detected by the most commonly-used sensors, and it is a much smaller practical problem for ships. Not that ships appreciate bashing into ice floes, but at this concentration or lower, it can be manageable to move around them (and get out of the ice pack you were surprised by!).
The most common type of sensor to use for detecting sea ice from space uses passive microwaves. The ice (it turns out) emits microwave energy much more effectively than the ocean around it. This gives it a higher brightness temperature. Between that and some other details, we can get back an estimate of the concentration of sea ice that the satellite was looking at. A word, though, as we're coming out of summer: the method relies on the difference between ice and water. If you have ponds of water sitting on the ice floes, which can happen on thick ice such as the Arctic can have, then your concentration (area) estimate will be biased low. The extent is probably still not too bad. The reason is, by the time you're falling below 15% cover, the thick floes will have been storm-tossed enough that the ponds will have been emptied, or that it's late enough in the season that the melt pond melted its way through the ice floe and there really isn't any ice under the apparent water any more.
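In its very simplest form, the brightness-temperature idea reduces to linear interpolation between two 'tie points', one for open water and one for ice. A sketch under that simplification (the tie-point values below are made-up illustrations; real retrieval algorithms use several channels and more elaborate tie points):

```python
def concentration_from_tb(tb, tb_water=160.0, tb_ice=250.0):
    """Ice concentration from a brightness temperature (K), by linear
    interpolation between water and ice tie points; clipped to [0, 1]."""
    c = (tb - tb_water) / (tb_ice - tb_water)
    return min(1.0, max(0.0, c))

print(concentration_from_tb(160.0))  # 0.0 -- open water
print(concentration_from_tb(250.0))  # 1.0 -- solid ice
print(concentration_from_tb(205.0))  # 0.5 -- half cover

# Melt ponds look like water to the sensor, so a ponded floe reads low:
print(concentration_from_tb(220.0))  # about 0.67 even if the floe is solid
```

This also shows why melt ponds bias the concentration low: pond-covered ice pulls the observed brightness temperature toward the water tie point.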
In looking at the NSIDC and Cryosphere Today pages on the Arctic melt, one thing to keep in mind is that one uses extent and the other uses area. Their numbers aren't directly comparable. They also differ in how they compute their estimates, in that one uses a longer averaging period than the other. The longer period gives you more confidence about the value (weather over the ice, or ocean, can give you false readings, but it moves pretty fast compared to the ice cover), but will miss some of the details in time.
More to come ... (bwahaha) But, in the mean time, questions you have about sea ice are welcome here.
14 September 2008
The 16 Climate Models
The number of climate models, in the sense I'm using, has nothing to do with how many different groups are working on modelling climate. I'm sure the latter figure is much larger than 16. Instead, it is an expansion on my simplest climate model, and can give a sense of what lies down the road for our exploration of climate modelling.
The simplest climate model is the 0 dimensional model. We average over all of latitude, longitude, elevation, and time (or at least enough time). Those are the 4 dimensions we could have studied, or could get our answer in terms of. The 0 dimensional model gives us just a number -- a single temperature to describe everything in the climate system. We could expand, perhaps, to also get a single wind, humidity, and a few other things. But it's distinctly lacking in terms of telling us everything we'd like to know. It fails to tell us why the surface averages 288 K, instead of the 255 K we see as the blackbody temperature. But it does give us the blackbody temperature as a start.
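That single number is quick to reproduce: balance absorbed sunlight against blackbody emission. A minimal sketch with round values for the solar constant and planetary albedo:

```python
# 0-dimensional model: absorbed solar = emitted blackbody radiation,
#   S/4 * (1 - albedo) = sigma * T^4
S = 1361.0       # solar constant, W/m^2
albedo = 0.30    # planetary albedo
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

T = (S * (1.0 - albedo) / (4.0 * sigma)) ** 0.25
print(f"{T:.0f} K")  # about 255 K, versus the observed 288 K surface average
```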
There is also only one 4 dimensional model -- where you include all 4 dimensions: latitude, longitude, elevation, and time. These are the full climate models, also called general circulation models (GCMs), atmosphere-ocean general circulation models (AOGCMs -- the original GCMs only let the atmosphere circulate), and a few other things. These are the most complex of the models.
But there are 14 more climate models possible: 4 one dimensional, 6 two dimensional, and 4 three dimensional.
In one dimension, we have the four which let 1 dimension vary, only:
Something quite close to the simplest model can be used for the time-only climate model. We would then let the earth-sun distance vary through the year, solar constant vary with the solar cycle, and albedo ... well, that would be a bit of a problem. As we've still averaged over all latitudes and longitudes, however, this model wouldn't tell us about why high latitudes are colder than low latitudes, or why land on the eastern side of oceans is warmer than land on the western side, or ... a lot. Still, it would take us another step of complexity down the road to understanding the climate system on global scale. This sort of model isn't used much professionally, but it can be a help
In elevation only, we'd (we hope) be able to look in to why the temperatures in the atmosphere do what they do -- falling as you rise through the troposphere and mesosphere, even or rising in the stratosphere. This class of models is known as the Radiative-Convective models (RCM). Namely, they include radiation and convection. The most famous early model of this sort is by Weatherald and Manabe, (1967?). We'll be coming back here.
In latitude only, we'll start being able to see why the poles are colder than the equator. Budyko and Sellers, separately but both in 1969, developed models like this. They're called energy balance models (EBM). They start with our simplest climate model, but applied to latitude belts on the earth. First you pretend that no energy enters or leaves the latitude belt except through the top of the atmosphere. Same thing as we said for the simplest model, except we applied it to the whole earth. You then compute the latitude belt's temperature, and discover that the tropics would be much warmer than they are, and the polar regions would be much colder. We're not surprised that we get the wrong answer here, but the degree of error then tells us by how much and where this 'no latitudinal energy transport' approximation is worst. You can then add the physics of 'heat flows from hot to cold' and get to work on how the climate in your model changes due to this fact.
The 4th one dimensional model, I've never seen anyone use -- a model in longitude only. This dimension is much quieter than the other two spatial dimensions. In the vertical, global average temperatures vary by something like 100 C in something like 10 km. 10 C/km; we'll get to exactly how much, where, and why, later. In latitude, temperatures vary from 30-40 C in low latitudes to -40 to -80 C in high latitudes (poles), so rounding again, about 100 C, but now across 10,000 km. About 0.01 C/km. In longitude, after we average over all year and all latitudes, ... there isn't much variation. As an eyeball matter, I'd be surprised if it were more than 10 C. (Project: Compute it. Let me know your result and sources. I may eventually do it myself.) This would be not more than 10 C, but still across 10,000 km or so, so something like 0.001 C/km at most (average absolute magnitude).
So our 4 models can be sequenced in terms of how much variation they get involved with, and, not coincidentally, it's something like the order of frequency I've seen the models in the literature:
The 6 two-dimensional models are:
In 3 dimensional modelling, we are back down to 4 models, as for 1 dimensional. This time, though, it's a matter of what we leave out:
And then we have kitchen sink, er, 4 dimensional, modelling.
A question I'll take up later is why we would run a simpler model (1d instead of 2d, 3d instead of 4d) if we could run the more complex model. Part of the answer will be that there's more than one way to be complex.
The simplest climate model is the 0 dimensional model. We average over all of latitude, longitude, elevation, and time (or at least enough time). Those are the 4 dimensions we could have studied, or could get our answer in terms of. The 0 dimensional model gives us just a number -- a single temperature to describe everything in the climate system. We could expand, perhaps, to also getting a single wind, humidity, and a few other things. But it's distinctly lacking in terms of telling us everything we'd like to know. It fails to tell us why the surface averages 288 K, instead of the 255 K we see as the blackbody temperature. But it does give us the blackbody temperature -- a start.
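That 255 K figure is one line of arithmetic: absorbed sunlight balancing blackbody emission.

```python
# The 0-dimensional model in one equation: S(1 - albedo)/4 = sigma * T^4.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0        # solar constant, W m^-2
ALBEDO = 0.30     # planetary albedo

# The /4 spreads the intercepted disk of sunlight over the whole sphere.
absorbed = S * (1.0 - ALBEDO) / 4.0
T = (absorbed / SIGMA) ** 0.25
print(round(T))  # 255 -- the blackbody temperature, well below the
                 # observed 288 K surface average
```

The 33 K gap between this number and the observed 288 K is what the greenhouse effect has to explain, and no amount of fiddling with this 0-dimensional balance will produce it.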
There is also only one 4 dimensional model -- where you include all 4 dimensions: latitude, longitude, elevation, and time. These are the full climate models, also called general circulation models (GCMs), atmosphere-ocean general circulation models (AOGCMs -- the original GCMs only let the atmosphere circulate), and a few other things. These are the most complex of the models.
But there are 14 more climate models possible: 4 one dimensional, 6 two dimensional, and 4 three dimensional.
In one dimension, we have the four models that let just one dimension vary:
- time
- elevation
- latitude
- longitude
Something quite close to the simplest model can be used for the time-only climate model. We would then let the earth-sun distance vary through the year, solar constant vary with the solar cycle, and albedo ... well, that would be a bit of a problem. As we've still averaged over all latitudes and longitudes, however, this model wouldn't tell us about why high latitudes are colder than low latitudes, or why land on the eastern side of oceans is warmer than land on the western side, or ... a lot. Still, it would take us another step of complexity down the road to understanding the climate system on a global scale. This sort of model isn't used much professionally, but it can be a help in building understanding.
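As a sketch of what the time-only model's forcing looks like, here's solar flux varying with earth-sun distance through the year. The perihelion date and the circular-motion approximation to the distance are simplifications:

```python
import math

S0 = 1361.0   # solar constant at mean earth-sun distance, W m^-2
ECC = 0.0167  # orbital eccentricity

def solar_flux(day_of_year):
    """Top-of-atmosphere flux through the year. Perihelion is taken as
    January 3, and the distance uses a circular-motion approximation,
    which is fine at this small an eccentricity."""
    angle = 2.0 * math.pi * (day_of_year - 3) / 365.25
    r = 1.0 - ECC * math.cos(angle)  # distance in units of the mean
    return S0 / r ** 2

print(round(solar_flux(3)))    # near perihelion: about 3% above S0
print(round(solar_flux(185)))  # near aphelion: about 3% below S0
```

Note the earth is closest to the sun in early January, northern winter -- one reason this globally averaged annual cycle is modest compared to what any single latitude experiences.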
In elevation only, we'd (we hope) be able to look into why the temperatures in the atmosphere do what they do -- falling as you rise through the troposphere and mesosphere, holding even or rising in the stratosphere. This class of models is known as the Radiative-Convective Models (RCM). Namely, they include radiation and convection. The most famous early model of this sort is by Manabe and Wetherald (1967). We'll be coming back here.
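The "convective" half of a Radiative-Convective Model can be cartooned in a few lines. The profile below is invented, and unlike a real RCM this version doesn't conserve the column's energy during the adjustment -- it's only meant to show the idea:

```python
# Cartoon convective adjustment: where radiation alone leaves temperature
# falling faster with height than a critical lapse rate (6.5 K/km here),
# convection is assumed to restore that rate.
CRITICAL = 6.5  # K/km

def convective_adjust(temps, dz_km=1.0):
    """Walk up the column, capping the lapse rate at CRITICAL."""
    out = list(temps)
    for k in range(1, len(out)):
        if (out[k - 1] - out[k]) / dz_km > CRITICAL:  # too steep: unstable
            out[k] = out[k - 1] - CRITICAL * dz_km
    return out

# Invented "radiation only" profile, too steep in the lower atmosphere:
radiative_only = [300.0, 290.0, 281.0, 273.0, 266.0]  # K, at 1 km spacing
print(convective_adjust(radiative_only))
# [300.0, 293.5, 287.0, 280.5, 274.0]
```

A full RCM iterates between a radiation step and an adjustment like this until the column settles down.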
In latitude only, we'll start being able to see why the poles are colder than the equator. Budyko and Sellers, separately but both in 1969, developed models like this. They're called energy balance models (EBM). They start with our simplest climate model, but applied to latitude belts on the earth. First you pretend that no energy enters or leaves the latitude belt except through the top of the atmosphere. Same thing as we said for the simplest model, except that there we applied it to the whole earth. You then compute the latitude belt's temperature, and discover that the tropics would be much warmer than they are, and the polar regions would be much colder. We're not surprised that we get the wrong answer here, but the degree of error then tells us by how much and where this 'no latitudinal energy transport' approximation is worst. You can then add the physics of 'heat flows from hot to cold' and get to work on how the climate in your model changes due to this fact.
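A minimal version of that "no transport" first step: each belt balances its own absorbed sunlight against blackbody emission. The insolation uses a two-term Legendre fit common in the EBM literature, and the uniform albedo is itself a simplification (real EBMs let albedo jump at the ice line):

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0        # solar constant, W m^-2

def annual_insolation(lat_deg):
    """Annual-mean insolation from the two-term Legendre fit used in
    the EBM literature: S/4 * (1 - 0.482 * P2(sin(latitude)))."""
    x = math.sin(math.radians(lat_deg))
    p2 = 0.5 * (3.0 * x * x - 1.0)
    return (S / 4.0) * (1.0 - 0.482 * p2)

def belt_temperature(lat_deg, albedo=0.30):
    """Isolated belt: absorbed sunlight = sigma * T^4, no transport."""
    absorbed = annual_insolation(lat_deg) * (1.0 - albedo)
    return (absorbed / SIGMA) ** 0.25

print(round(belt_temperature(0)))   # about 269 K, above the 255 K global value
print(round(belt_temperature(80)))  # about 218 K, far below it
```

The spread between these belts is larger than observed, and that excess is exactly the signature of the missing equator-to-pole heat transport.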
The 4th one dimensional model, I've never seen anyone use -- a model in longitude only. This dimension is much quieter than the other two spatial dimensions. In the vertical, global average temperatures vary by something like 100 C in something like 10 km. 10 C/km; we'll get to exactly how much, where, and why, later. In latitude, temperatures vary from 30-40 C in low latitudes to -40 to -80 C in high latitudes (poles), so rounding again, about 100 C, but now across 10,000 km. About 0.01 C/km. In longitude, after we average over all year and all latitudes, ... there isn't much variation. As an eyeball matter, I'd be surprised if it were more than 10 C. (Project: Compute it. Let me know your result and sources. I may eventually do it myself.) This would be not more than 10 C, but still across 10,000 km or so, so something like 0.001 C/km at most (average absolute magnitude).
So our 4 models can be ranked by how much variation they involve, and, not coincidentally, that's roughly the order of how often I've seen the models in the literature:
- Elevation -- Radiative-Convective Models (RCM) -- 10 C/km, 100+ C range
- Latitude -- Energy Balance Models (EBM) -- 0.01 C/km, about 100 C range
- Time -- (not common enough to have a name I know of) -- a few C range, seasonally
- Longitude -- (never used that I know of) -- 0.001 C/km or less, a few C range
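The back-of-envelope gradients in that list check out in a couple of lines:

```python
# The three gradients from the list, made explicit.
vertical = 100.0 / 10.0       # ~100 C over ~10 km      -> 10 C/km
meridional = 100.0 / 10000.0  # ~100 C over ~10,000 km  -> 0.01 C/km
zonal = 10.0 / 10000.0        # <=10 C over ~10,000 km  -> 0.001 C/km
print(vertical, meridional, zonal)  # 10.0 0.01 0.001
```

Four orders of magnitude separate the vertical from the zonal gradient, which is why nobody bothers with a longitude-only model.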
The 6 two-dimensional models are:
- time-elevation (an expanded Radiative-Convective Model)
- time-latitude (an expanded Energy Balance Model)
- time-longitude (I've never seen done as a model, but Hovmöller diagrams do this in data analysis)
- elevation-latitude (a cross between Radiative-Convective and Energy Balance)
- elevation-longitude (I've never seen as a model, but it's not unheard of for data analysis)
- latitude-longitude (I've never seen as a model, but common for data analysis)
In 3 dimensional modelling, we are back down to 4 models, as for 1 dimensional. This time, though, it's a matter of what we leave out:
- time (keep latitude, longitude, elevation; not common for models)
- longitude (keep time, latitude, elevation -> the straight combination of RCM and EBM; most common of the 3D models)
- latitude (keep time, longitude, elevation)
- elevation (keep time, latitude, longitude)
And then we have kitchen sink, er, 4 dimensional, modelling.
A question I'll take up later is why we would run a simpler model (1d instead of 2d, 3d instead of 4d) if we could run the more complex model. Part of the answer will be that there's more than one way to be complex.