It's a little premature to make a detailed assessment of the predictions for September's average extent, as the final numbers aren't in. They will be soon, but my focus here is actually on the question of how to go about doing the comparisons. Earlier, I talked about testing ideas, but there the concern was more about how to find something you could meaningfully test. Here, with September's average extent, we already have a well-defined, meaningful thing to look at.
Our concern now is to decide how to compare the observed September average extent with the climatological extent and with a prediction. While mine wasn't the best guess in the June summary at ARCUS, it was mine, so I know what the author had in mind.
Let's say that the true number will be 5.25 million km^2. My prediction was 4.92. The useless approach is to look at the two figures, see that they're different, and declare that my prediction was worthless. Now it might be, but you don't know that just from the fact that the prediction and the observation were different. Another part of my prediction was to note that the standard deviation of the prediction was 0.47 million km^2. That is a measure of the 'weather' involved in sea ice extents -- the September average extent has that much variation just because weather happens. Consequently, even if I were absolutely correct about the mean (most likely value) and the standard deviation, I'd expect my prediction to be 'wrong' most of the time -- 'wrong' in that useless sense that the observation differed by some observable amount from my prediction. The more useful approach is to allow for the fact that the predicted value really represents a distribution of possibilities -- while 4.92 is the most likely value from my prediction, 5.25 is still quite possible.
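To make that concrete, here's a minimal sketch in Python using just the numbers above (prediction 4.92, spread 0.47, and a supposed observation of 5.25). It computes how many standard deviations the observation sits from the prediction, and how often pure 'weather' would produce at least that large a miss even if the prediction were perfectly calibrated.

```python
from math import erf, sqrt

prediction_mean = 4.92   # predicted September average, million km^2
spread = 0.47            # 'weather' standard deviation, million km^2
observed = 5.25          # the supposed observed September average

# How many standard deviations off was the prediction?
z = (observed - prediction_mean) / spread
print(f"z = {z:.2f} standard deviations")          # about 0.70

# Chance of missing by at least this much if the prediction were exactly right
# (two-sided tail probability of a normal distribution)
tail = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
print(f"P(miss at least this big) = {tail:.2f}")   # about 0.48
```

Nearly half the time, weather alone would push the observation at least that far from a perfectly calibrated forecast, so a 0.33 million km^2 miss says essentially nothing by itself.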
We also like to have a 'null forecaster' to compare with. The 'null forecaster' is a particularly simple forecaster, one with no brains to speak of, and very little memory. You always want your prediction to do better than the null forecaster. Otherwise, people could do as well or better with far less effort than you're putting in. The first 'null forecaster' we reach for is climatology -- predict that things will be the way they 'usually' are. Lately, for sea ice, we've been seeing figures which are wildly different from any earlier observations, so we have to do more to decide what we mean by 'climatology' for sea ice. I noticed that the 50's, 60's, and 70's up to the start of the satellite era had as much or somewhat more ice than the early part of the satellite era (see Chapman and Walsh's data set at the NSIDC). My 'climatological' value for the purpose of making my prediction was 7.38 million km^2, the average of about the first 15 years of the satellite era. A 30 year average including the last 15 years of the pre-satellite era would be about that or a little higher. Again, that figure is part of a distribution, since even before the recent trend, there were years with more or less ice cover than climatology.
It may be a surprise, but we should also consider the natural variability when looking at the observed value for the month. Since we're really looking towards climate, we have in mind that if the weather this summer had been warmer, there'd be less September ice. And if it had been colder, or the wind patterns different, there would have been more ice this September. Again, the spread is about 0.47 million km^2 (at least that's my estimate for the figure).
I'll make the assumption (because otherwise we don't know what to do) that the ranges form a nice bell curve, also known as a 'normal distribution' or 'Gaussian distribution'. We can then plot each distribution -- the observation, the prediction, and what climatology might say. They're in the figure:
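The figure itself isn't reproduced here, but redrawing it is straightforward. A minimal sketch, assuming normal distributions and using the same 0.47 spread for all three curves (the post gives 0.47 for the observation and the prediction; applying it to climatology as well is my assumption):

```python
import numpy as np
import matplotlib.pyplot as plt

def gaussian(x, mean, sd):
    """Normal probability density."""
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

x = np.linspace(3.0, 9.0, 500)   # September average extent, million km^2
sd = 0.47                        # natural ('weather') variability

for label, mean in [("Observed (5.25)", 5.25),
                    ("Prediction (4.92)", 4.92),
                    ("Climatology (7.38)", 7.38)]:
    plt.plot(x, gaussian(x, mean, sd), label=label)

plt.xlabel("September average extent (million km^2)")
plt.ylabel("Probability density")
plt.legend()
plt.show()
```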
This is one that makes a lot of sense immediately from the graphic. The Observed and Prediction curves overlap each other substantially, while the curves for Observed and Climatology are so far from each other that there's only the tiniest overlap (near 6.4). That tiny overlap occurs for an area where the curves are extremely low -- meaning that neither the observation nor the climatology is likely to produce a value near 6.4, and it gets worse if (as happened) what you saw was 5.25.
The comparison of predictions gets harder if the predictions have different standard deviations. I could, for instance, have decided that although the natural variability was 0.47, I was not confident about my prediction method, and so have taken twice as large a variability (for my prediction -- the natural variability for the observation and for the climatology is what it is and not subject to change by me). Obviously, that prediction would be worse than the one I made. Or at least it would be, given the observed amount. If we'd really seen 4.25 instead of 5.25, I would have been better off with the less narrow prediction -- its curve would be flatter and lower at the peak, but higher out where 4.25 sits. I'll leave that more complicated situation for a later note.
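Here is a minimal sketch of that comparison, again assuming normal distributions: score each version of the prediction by how much probability density it puts on the value actually observed. The 0.94 spread for the 'less confident' version is simply double the 0.47; both versions keep the same mean.

```python
import numpy as np

def density(x, mean, sd):
    """Normal probability density at x."""
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

sharp  = dict(mean=4.92, sd=0.47)   # the prediction as made
hedged = dict(mean=4.92, sd=0.94)   # same mean, doubled spread

for observed in (5.25, 4.25):
    print(f"observed {observed}: "
          f"sharp {density(observed, **sharp):.3f}, "
          f"hedged {density(observed, **hedged):.3f}")
# With 5.25 observed, the sharper prediction scores higher;
# had the observation been 4.25, the flatter, hedged curve would have won.
```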
For now, though, we can look at the people who said that the sea ice pack had 'recovered' (which would mean 'got back to climatology') and see that they were horribly wrong. Far more so than any of the serious predictions in the Sea Ice Outlook (the June report; I confess I haven't read all of the later reports). The 'sea ice has recovered' folks are as wrong as a prediction of 3.1 million km^2 would have been. The lowest June prediction, by far, was 3.2, but the authors noted that it was an 'aggressive' prediction -- they'd skewed everything towards making the model come up with a low number. Their 'moderate' prediction was for a little over 4.7. Shift my yellow triangle curve 0.2 to the left and you have what theirs looks like -- still pretty close.
To go back to my prediction, it was far better than the null forecaster (climatology), so not 'worthless'. Or at least not by that measure. If the variability were small, however, then the curves would be narrow spikes. If the variability were 0.047, ten times smaller than it is, the curves would be near zero once you were more than a couple tenths away from the prediction. Then the distribution for my prediction would show almost no overlap with the observation and its distribution. That would be, if not worthless (at least it was closer than climatology), at least hard to credit with having done much good.
17 September 2009
Sea Ice Bet Status
It looks like the Arctic sea ice extent has bottomed out, as of the 12th or so. I'm confident that a good storm system could give us a new minimum -- both by slamming up the loose ice in the western Arctic (reducing extent by pushing the ice pack together) and by mixing up warmer water from the ocean (reducing extent by melting the ice). But, as a rule, this sort of thing is rare. A storm would have to hit the right area in the next few days. Otherwise the atmosphere will be cold enough to simply keep freezing new ice.
So, starting to be time to assess our various guesses. William Connolley and I made our 50 quatloo wager on over (his side) or under 5.38 million km^2 for the September average. The minimum, if we have indeed seen the minimum, is about 5.1 million. That looks favorable for my side of the bet. Though it would mean I definitely missed the September average, as I said that would be 4.92. More about that in a moment. The figure below suggests that since the extent dropped below 5.38 right about the start of September, I should be safe. Usually (see the climatological curve) the pack doesn't gain much area in September. But William could still win if we have an unusual last two weeks and the ice pack gains a lot of extent.
I've added a few lines to the NSIDC graphic of 12 September. One is the vertical line, to highlight when it is we dropped below the climatological minimum. We've been below normal since early August. That in itself suggests a climate change. We're now about 3 standard deviations below the climatological minimum, which again, in such a short record, suggests a climate change. The significance of the extra large amount of ocean being exposed to the atmosphere, for an extra long time, is that it lets more ocean absorb more heat from the sun. Though this year looks to be a higher extent than 2007 and 2008, it's still below any year except 2007 and 2008. If we didn't know about those two years, we'd be surprised by this year being so low -- the 2005 September average extent (the record before 2007) was 5.57 million km^2 -- far higher than this year is liable to average.
Still early to decide whether I owe William, or vice versa. Both of us will win our bets with Alastair. Looking down to the poll that I invited you to answer back in June, I'll say that the people who called for 7.5 million (the previous climatology) and 6.0 million km^2 are wrong. Also the 1 who went for 3, the 2 who went for 3.5, and the 4 who went for 4 million km^2 for the month's average. The 12 who went for 4.5 (which means anything in the range 4.25 to 4.75) should be pulling for a really massive storm to hit the western Arctic and obliterate huge amounts of ice extent. The main candidates are the 3 who went for 5, and the 1 who went for 5.5 (ranges of 4.75 to 5.25, and 5.25 to 5.75, respectively).
Something else this brings up (or at least this plus some comments I saw at a different site) is "How do you judge the quality of predictions?" I'll be coming back to this, using the Sea Ice Outlook estimates for my illustrations.
16 September 2009
Title decoding
Title of a recent paper in Science: Motile Cilia of Human Airway Epithelia are Chemosensory (Shah and others, vol. 325, pp. 1131-1134, 2009).
Time to apply the Science Jabberwocky approach, as I'm unfamiliar with many of those terms:
Mimsy borogoves of Human Airway Bandersnatches are Frumious.
(motile) (cilia) of Human Airway (epithelia) are (chemosensory)
Four terms we need to get definitions of (those of us who don't already know them, that is).
Cilia, whatever they are, can apparently be motile or non-motile. By the writing, that doesn't seem to be the new observation. But that they can be frumious, er, chemosensory, is apparently news.
The abstract itself tells us what the cilia are -- microscopic projections that extend from eukaryotic cells. (If we know what a eukaryotic cell is, we're set. Otherwise, we have to do a little more research, and discover that eukaryotic cells are those with separate parts to them, including a nucleus -- that covers all animals, plants, and fungi).
We also have to go look up 'epithelial'. We're ahead of the game if we know that epi- tends to have something to do with 'on the surface'. Epithelial cells are those that are on the surface of our body cavities -- lungs, digestive system, etc.
Chemosensory ... well, sensory is nicely obvious. Chemo- as a prefix means that the cells are sensing chemicals.
So with a little decoding work, and perhaps using a Google search for definitions (enter define:epithelial as your search and you'll get links to the definition of epithelial), we arrive at our understanding of the title: there are cells lining the surface of our airways that have little extensions. The authors show that the extensions are sensitive to chemicals.
In reading the paper itself, we find that it is particular kinds of chemicals that these cilia are sensitive to -- 'bitter'. When they detect such compounds in the air, they start getting active and try to flush out the bad stuff they've detected.
The conclusion is not especially a surprise to me. I've long been confident that my airways were sensitive to certain chemicals (though I didn't know which). Walking past a perfume counter has always been a problem for me, as my lungs shut down, or at least try to. Folks have said that it's just my imagination, and that all that is happening is that I'm smelling the perfume and causing the rest myself. That doesn't work well as a hypothesis because I have an exceptionally bad sense of smell. Usually the way I know the perfume is present is that I start having more difficulty breathing. The paper also corresponds to a different experience of mine. Namely, I don't have such reactions to flowers, even flowers in large masses, as we get in spring with the honeysuckle or the lilac bush. The cilia are reactive to bitter compounds (known from the paper), and probably (a point that's very testable) perfumes have more such compounds than flowers do.
Per my usual, I've written the corresponding author about this post. Also, if the sample donation process is quick, easy, painless, and harmless, I'm willing to donate a sample of my highly-reactive (I think) epithelial cells for their further research.
15 September 2009
Good science, wrong answer
Sometimes it happens that somebody does good science, but has arrived at a wrong answer. Since most of us think that the answer has to be right (and I'll agree that it's better when it is), this will take some explaining. Let's go back to what science is about -- trying to understand the universe in ways that can be shared. Good science, then, is something that leads to us understanding more about the universe.
For my illustration, I'll go back to something now less controversial than climate. In the 1980s, paleontologists David Raup and J. J. Sepkoski advanced the idea that mass extinctions, such as the one that clobbered the dinosaurs, were periodic. Approximately every 26 million years, for about the last 250 million years, they observed a spike in the extinction rate. Not all the spikes were as large as the one that got the dinosaurs.
It so happened that they were at the University of Chicago, in the Department of Geophysical Sciences, and so was I. Further, I was working with time series for my master's thesis. So I asked them about working with their data to see what I would find with my very different approach. They were gracious and spent some time explaining what I was looking at, knowing that I didn't think they were right. By my approach, indeed, their idea did not stand. My approach, however, was not a strong one, being susceptible to some important errors. So I never published about it. Still, along the way, I learned more both about time series and about paleontological data. So that's one plus making the periodic extinction idea 'good science' -- I, at least, learned more about the universe, even if not enough to make an original contribution.
One mark of good science is that it prompts further research. Raup and Sepkoski, in their original paper, had made a reasonable case -- 'reasonable' meaning that it could not be shot down by any simple means, and 'simple' meaning that the answer was already in the scientific literature. So to knock down the case, itself a normal process in science, the critics had to do some research to show how weaknesses or errors in one or more of the following led to the erroneous conclusion:
* The statistical methods
* The geological time scale
* The paleontological data (extinction figures, and their dating)
The idea was not sensitive to the geological time scale used, so that fell away fairly quickly. The statistical methods did develop a longer-lasting discussion -- new ones were developed, flaws in both the new and the old methods were described (and then there was discussion about whether the claimed flaws were real).
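I don't know exactly which tests each side used, so the following is only a generic illustration of the kind of question involved: look for a spectral peak near a 26 million year period in an extinction-rate series. The series below is synthetic (a 26 Myr cycle buried in noise), purely to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 1.0                                   # sample spacing, million years
t = np.arange(0, 260, dt)                  # synthetic 260 Myr record
# Synthetic extinction-rate series: a 26 Myr cycle plus noise
rate = 1.0 + 1.0 * np.cos(2 * np.pi * t / 26.0) + rng.normal(0, 1.0, t.size)

power = np.abs(np.fft.rfft(rate - rate.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=dt)      # cycles per million years

best = np.argmax(power[1:]) + 1            # skip the zero-frequency bin
print(f"strongest period: {1.0 / freqs[best]:.1f} million years")  # about 26
```

The hard part in the real argument wasn't this step; it was deciding whether such a peak is stronger than you'd expect by chance from a record with dating errors and uneven sampling.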
Most interesting to me, and I think where the greatest good for the science was, was going back to the data. In saying that the extinctions were periodic, one carried the image of something crashing into the earth (like the meteor that did in the dinosaurs) and killing off huge numbers of species (and genera, and families) very quickly. One of the data problems, then, was getting accurate dates for the time of extinction. Often the data could only say that the things went extinct sometime within a several million year window. That's a problem, as then your view of whether it was periodic could depend on whether you put the date of extinction at one end of the geological period or another. So people went to work on getting better dates for when the species went extinct.
Also, I noted above that the original idea applied to the last 250 million years. The reason was that, when they started, that was as far back as you could go with reasonable data. So work also went into trying to push back the period of reasonable data.
I don't know what the field ultimately concluded about the idea. I do know that the work to advance or refute the idea resulted in more data about when species went extinct, and better dates for when they did. Further, those newer and better data are themselves useful for learning more about the universe -- there's more to be gained than just answering the original question about whether mass extinctions were periodic.
So, not only did the original publication result in more being learned about the universe, but it was in a way that enables even more learning to happen. That makes it good science. The original idea might have been wrong, but it definitely was good science.
I've focused on the side of scientific merit here. There was a lot of, well, unprofessional response as well. You can read about both parts in The Nemesis Affair: A Story of the Death of Dinosaurs and the Ways of Science by David Raup. Part of it was because the idea that any mass extinction had to do with things crashing into the earth was still new, and still widely not accepted. Then this idea comes up and says that not only does it happen (bad enough), but that it had happened many times, and happens regularly.
14 September 2009
An Intro to Peer Review
I didn't really mean to present an object lesson in why peer review is a good thing. But, having done so, it seems a good time to use it to illustrate what the process looks like.
First step is, somebody has to put something forward for consideration. In this case, my note on field relevance last week. One important aspect of this is, the 'something' has to be said concretely enough that people can point to the mistakes you've made.
The second step is that the comments (reviews) have to point to specific things that are wrong. Ranting about leftists (happened elsewhere) doesn't count. Saying that I grossly understated the relevance of biologists because -- and give reasons for that 'because' -- does.
The third step is for the original author to revise the article in response to the reviewer comments. That doesn't necessarily mean 'do what every reviewer wants', not least because the reviewers (cf. gmcrews and John Mashey) may disagree. But there should be at least some response, if only to add some explanation in the article that addresses why you're not doing what reviewer X wanted. I'll be doing that later, but am waiting for word from the biology folks about how the field applies to deciding whether and how much of recent climate change is due to human activity.
To summarize the comments here (do read the originals if you haven't already):
* Many fields are missing
* Many fields are placed too high or too low (mostly too low)
* I conflated two different questions -- whether and how much warming there has been, with whether and how much of it has been from human activity (some irony there, as one of the things I did say was that the picture changes depending on what exactly the question is)
* Irrespective of whether the previous points were addressed, the approach itself is not useful
Each of these is a common general sort of comment to see in a peer review. To rephrase them more generally:
* Incomplete
* Inaccurate
* Question is not specific enough
* Question is not interesting, approach is not useful
In terms of my rewriting process, the first two are pretty easy to deal with. Many people made many good comments. Those can be incorporated fairly straightforwardly, along with the fields that the comments prompted me to remember even if they weren't directly mentioned.
The latter two, however, aren't quite so obvious. The third is taken care of if I clearly make the question addressed "How much of the recent warming is due to human activity?" And that is what the graphic actually tried to address (though still with some issues with respect to the first two sorts of comment).
But, is it useful to address that question in this way? My thought was that for non-experts, it could be a useful guide when encountering, say, a 'conference' whose speakers were almost entirely from the lower ranges. On the other hand, those antiscientific conferences are seldom so specific about what they're addressing. Either the figure is focused on too narrow a question, or many separate such figures would be needed. Experts, or at least folks at, say, K6 and above in Mashey's scale, should just go read the original materials to decide.
I haven't decided which way to go on this. Comments, as always, welcome. I also realized that it's a long time since I wrote up my comment policy, and link policy, so they are now linked to from the upper right (in the 'welcome' section).
In the mean time, I'm taking down my version of the figure and asking those who have copied it to remove it as well.
But, to come back to peer review:
All this illustrates why it is you want to read peer-reviewed sources for your science. Nobody knows everything, so papers can otherwise be incomplete, inaccurate, etc. People can also think that something is obvious, but have forgotten about things that they themselves do know (like my temporary brain death about biology as a field for knowing that climate is changing). Or they know certain things so well themselves that they don't write them up well for the more general audience. (Even in a professional journal, most of the readers aren't in your particular sub-sub-sub-field. 'More general' may only mean making it accessible to the sub-sub-field instead, but that can still be a challenge.) In a productive peer review process, these questions are all addressed.
10 September 2009
Climate and Computer Science
I'll pick up John Mashey's comment from the 'relevance' thread, as it illustrates in another way some of what I mean regarding relevance, and about who might know what. He wrote:
As a group, computer scientists are properly placed in the last tier.
Once upon a time, computer scientists often had early backgrounds in natural sciences, before shifting to CMPSC, especially when there were few undergraduate CMPSC degree programs.
This is less true these days, and people so inclined can get through CMPSC degrees with less physics, math, and statistics than one would expect.
Many computer scientists would fit B3 background, K2-K3 level of knowledge on that chart I linked earlier.
On that scale, I only rate myself a K4, which corresponds roughly to Robert's Tier 5. Many CMPSC PhDs would rate no higher than K2 (or even K1, I'm afraid, on climate science).
Of course John is one who has been spending serious effort at learning the science, so although our shortcut puts him on a low tier in this area (he's high for computer science!), the earned knowledge is higher. Best, of course, is to work from the actual knowledge of the individual. On the other hand, presented with a list of 60 speakers at a meeting, and seeing few from fields in the upper levels (applicable to the topic at hand), it's not a bad bet that the meeting isn't really about the science (or whatever expertise is involved).
If we're talking specifically about climate modellers, we're talking about people who use computers a lot, and make the computers run for very long periods. So, does that mean that all climate modellers are experts about computers the way that computer scientists are? Absolutely not. Again, different matters. Some climate modellers, particularly those from the early days, are quite knowledgeable about gruesome details of computer science. But, as with computer scientists and climate models, that's not the way to bet.
I'll link again to John's K-scale. A computer scientist spends most time learning about computer science. At low levels, this means things like learning programming languages, how to write simple algorithms, and the like. Move up, and a computer scientist will be learning how to write the programs that turn a program into something the computer can actually work with (compilers), how to write the system that keeps the computer doing all the sorts of processing you want it to (operating systems), interesting (to computer scientists, at least :-) things about data structures, databases, syntactic analysis (how to invent programming languages, among other things), abstract algorithms, and ... well, probably quite a few more things. It's a long time since I was an undergraduate rooming with the teaching assistant for the operating systems class. Things have changed, I'm sure.
Anyhow, on that scale of computer science knowledge, I probably sit in the K2-K3 level. I use computers a lot. And, on the scale of things in my field, I'm pretty good with the computer science end of things. But, considered as matters of computer science, things like numerical weather prediction models, ice sheet models, ocean models, climate models, etc., are just not that involved. The inputs take predictable paths through the program (clouds don't get to change their mind about how they behave, unlike what happens when you're making the computer work hard by making it do multiple different taxing operations at the same time and doing what you like to the programs as they run). Our programs are very demanding in the sense that it takes a lot of processing to get through to the answer. But in the computer science sense, it's fairly simple stuff -- beat on nail with hammer a billion times; here's your hammer and there's the nail, go to it.
The climate science, figuring out how to design the hammer, what exactly the nail looks like, and whether it's a billion times or a trillion you have to whack on it -- that part is quite complex. So, same as you can do well in my fields with only K2-K3 levels of knowledge of computer science, computer scientists can do well in theirs with only K2-K3 knowledge of climate science (or mechanical engineering, or Thai, or Shakespeare, ...).
Again, what the most relevant expertise is depends on what question you're trying to answer or problem you're trying to solve. If you want to write a climate model, you should study a lot of climate science, and a bit of computer science. To write the whole modern model yourself, you'll want to study meteorology, oceanography, glaciology, thermodynamics, radiative transfer, fluid dynamics, turbulence, cloud physics, and at least a bit (these days) of hydrology, limnology, and a good slug of mathematics. On the computer science side, you need to learn how to write in a programming language. That's it. It would be nice to know more, as for all things. But the only thing required, from a computer science standpoint, is a programming language. No need for syntactic analysis, operating system design, or the rest of the list I gave above. Not for climate model building, that is. If you want to solve a different problem, they can be vital. (I include numerical analysis in mathematics -- the field predated the existence of electronic computers. Arguably so did computer science. But the modern field, as with modern climatology, is different than 100 years ago.)
09 September 2009
Vickie is now blogging
My wife has started, tonight, blogging. It is about her experiences volunteering at one of the few nonprofit organizations that works with prostituted women. For the adults in my audience, I strongly recommend reading. Excellent writing, and a real problem. (Yes, I'm probably biased, as Vickie will be the first to tell you. But the Maryland State Arts Council is beyond my radius of influence, and they awarded her the major first prize a couple of years ago, as did the MD Writer's Association. Read for yourself.)
Her blog is Vickie's Prostitution Blog.
I've also established a facebook group for her, 'Vickie Grumbine Writing'
Update: Vickie Grumbine Writing. Thanks thingsbreak.
08 September 2009
What fields are relevant?
I've never met someone who knew everything. Certainly I've met some very bright people, and people who knew quite a lot. But nobody has known everything. Conversely, I'm a bright guy, and know a lot of stuff, but I've never met anybody who didn't know things that I didn't. That includes an 8-year-old who was pointing out to me how to identify some animal tracks (they'd talked about this in her science class recently).
People know best what they've studied the most is my rule of thumb. That's why I go to a medical doctor when I'm sick, but take the dogs to a veterinarian when they're sick. I call up a plumber when the water heater needs replacing, and take my car to an auto mechanic when it needs work. And not vice versa on any of them. It might be true that the auto mechanic is also a good plumber. But, odds are, the person who focused on learning plumbing is the better plumber.
None of this should be a surprise to anybody, yet it seems in practice that it is once we come to climate. Let's be a little more specific in that -- make it the question of whether and how much human activity is affecting climate. There are many other climate questions, but it's this one that attracts the attention, and lists of people on declarations and petitions. If you look only at the people who have professionally studied the matter and contributed to our knowledge of the matter, then the answer to the question is an overwhelming 'yes', and a less overwhelming but substantial 'about half the warming of the last 50 years'.
I've tried to set up a graphic (you folks who have actual skills in graphics are invited to submit improved versions!) of 'the way to bet'. The idea is to provide a loose relative guide as to which fields most commonly have people you can expect to have studied material relevant to the question of global warming and human contributions to it, from the standpoint of the natural science of the climate system.
Climatology, naturally, is on the top tier -- many people in that field will have relevant background. Not all, remember. Some climatologists look no further than their own forest (microclimatology of forests -- how the conditions in the forest differ locally from the larger scale averages) or other small area, or small time scale. Still, many will be relevant.
Second tier, fewer of the people will be climate-relevant, but still many. Oceanography, meteorology, glaciology.
Third tier, most people will not be climate-relevant. But some have made their way, at least, from those fields over to studying climate. That includes areas like Geomorphology (study of the shape of the surface of the earth) and quantum physics (the ones who come to climate were studying absorption of radiation).
Fourth tier, almost nobody is studying things relevant to the question I posed. The extremely rare exception does exist -- Judith Lean has come from astrophysics and done some good work (with David Rind, a more classically obvious climate scientist) regarding solar influence on climate. Milankovitch was an astronomer/mathematical analyst who developed an important theory of the ice ages.
Fifth tier, I don't think anybody has studied the question I posed directly. I do know a couple of nuclear physicists who have moved to climate-relevant studies. But they essentially started their careers over with some years of study to make the migration. In this, it's more a matter that they once were nuclear physicists. After some years of retraining, they finally were able to make contributions to weather and climate. At which point, really, they were meteorologists who happened to know surprisingly large amounts about nuclear physics.
Sixth tier, I wouldn't include at all except that they show up sometimes on the lists. My doctor is a good guy, bright, interested, and so on. But it takes a lot of work studying things other than climate to become a doctor, and more work after the degree is awarded to stay knowledgeable in that field. That doesn't leave a lot of time to become expert in some other highly unrelated field.
[Figure removed 14 September 2009 -- See Intro to Peer Review for details]
Suggestions of areas to add, or to move up or down, are welcome. I'm sure I have missed many fields and others are probably too high or low.
For now, though, if you're not an expert on climate yourself, I'll suggest that if the source is in the first two tiers, there's a fair chance that they've got some relevant background. If they're in the bottom three, almost certainly not -- skip these. The third level is probably to skip as well, but maybe pencil them in for later study, after you've developed more knowledge yourself from studying sources on the first two levels.
This ranking, of course, applies to the particular question asked. If the question is different, say "What are the medical effects of a warmer climate?", the pyramid would be quite different and MD's would be the top tier. Meteorology would move down one or two levels. Expertise exists only within some area. As I said, nobody knows everything.
Update:
frequent commenter jg has contributed the following graphic:
One good general change he made is to distinguish between general skills that can transfer to studying climate and the particular sorts of detailed skills or knowledge one might have. Almost everyone involved in studying climate, for instance, knows some statistics and mathematical analysis. Many fields also require such knowledge, so people in those fields would find it easier to move over to climate.
Another good change he made was to put the question directly into the graphic. This is important. As I said, but didn't illustrate, the priority list depends on exactly what question is at hand.
People know best what they've studied the most is my rule of thumb. That's why I go to a medical doctor when I'm sick, but take the dogs to a veterinarian when they're sick. I call up a plumber when the water heater needs replacing, and take my car to an auto mechanic when it needs work. And not vice versa on any of them. It might be true that the auto mechanic is also a good plumber. But, odds are, the person who focused on learning plumbing is the better plumber.
None of this should be a surprise to anybody, yet it seems in practice that it is once we come to climate. Let's be a little more specific in that -- make it the question of whether and how much human activity is affecting climate. There are many other climate questions, but it's this one that attracts the attention, and lists of people on declarations and petitions. If you look only at the people who have professionally studied the matter and contributed to our knowledge of the matter, then the answer to the question is an overwhelming 'yes', and a less overwhelming but substantial 'about half the warming of the last 50 years'.
I've tried to set up a graphic (you folks who have actual skills in graphics are invited to submit improved versions!) of 'the way to bet'. The idea is to provide a loose relative guide as to which fields most commonly have people who you can have the greatest expectations that they have studied material relevant to the question of global warming and human contributions to it from a standpoint of the natural science of the climate system.
Climatology, naturally, is on the top tier -- many people in that field will have relevant background. Not all, remember. Some climatologists look no further than their own forest (microclimatology of forests -- how the conditions in the forest differ locally from the larger scale averages) or other small area, or small time scale. Still, many will be relevant.
Second tier, fewer of the people will be climate-relevant, but still many. Oceanography, meteorology, glaciology.
Third tier, most people will not be climate-relevant. But some have made their way, at least, from those fields over to studying climate. That includes areas like Geomorphology (study of the shape of the surface of the earth) and quantum physics (the ones who come to climate were studying absorption of radiation).
Fourth tier, almost nobody is studying things relevant to the question I posed. The extremely rare exception does exist -- Judith Lean has come from astrophysics and done some good work (with David Rind, a more classically obvious climate scientist) regarding solar influence on climate. Milankovitch was an astronomer/mathematical analyst who developed an important theory of the ice ages.
Fifth tier, I don't think anybody has studied the question I posed directly. I do know a couple of nuclear physicists who have moved to climate-relevant studies. But they essentially started their careers over with some years of study to make the migration. In this, it's more a matter that they once were nuclear physicists. After some years of retraining, they finally were able to make contributions to weather and climate. At which point, really, they were meteorologists who happened to know surprisingly large amounts about nuclear physics.
Sixth tier, I wouldn't include at all except that they show up sometimes on the lists. My doctor is a good guy, bright, interested, and so on. But it takes a lot of work studying things other than climate to become a doctor, and more work after the degree is awarded to stay knowledgeable in that field. That doesn't leave a lot of time to become expert in some other highly unrelated field.
[Figure removed 14 September 2009 -- See Intro to Peer Review for details]
Suggestions of areas to add, or to move up or down, are welcome. I'm sure I have missed many fields and others are probably too high or low.
For now, though, if you're not an expert on climate yourself, I'll suggest that if the source is in the first two tiers, there's a fair chance that they've got some relevant background. If they're in the bottom 3, almost certainly not -- skip these. And the third level, is probably to skip but maybe pencil them in for later study, after you've developed more knowledge yourself from studying sources on the first two levels.
This ranking, of course, applies to the particular question asked. If the question is different, say "What are the medical effects of a warmer climate?", the pyramid would be quite different, and MDs would be the top tier. Meteorology would move down a level or two. Expertise exists only within some area. As I said, nobody knows everything.
Update:
frequent commenter jg has contributed the following graphic:
A generally good change he's made is to split between the general skills that can transfer to studying climate and the particular sorts of detailed skills or knowledge one might have. Almost everyone involved in studying climate, for instance, knows some statistics and mathematical analysis. Many other fields also require such knowledge, so people from those fields would find it easier to move over to climate.
Another good change he made was to put the question directly into the graphic. This is important. As I said, but didn't illustrate, the priority list depends on exactly what question is at hand.
04 September 2009
One dimensional climate models
Some time back, I described the simplest meaningful climate model, and then gave a brief survey of the 16 climate models.
The next four I'll take up are the four 1-dimensional climate models. These are the models that vary only in longitude, only in time, only in latitude, or only in the vertical. I'll take them in that order, which turns out to be the order of difficulty, and the order of interest. It isn't until the vertical that we'll get to how, exactly, the greenhouse effect works.
On the other hand, with the model in latitude we'll see some powerful statements about the fact that energy has to move from the equator towards the pole. Not just the fact, but how much, and how it changes with latitude.
In the model with only time, we can look a little more at things we were thinking towards with the simplest model -- what happens if the solar output varies, or if the earth's albedo does. More is involved, and required, than just that. We'll have to start paying attention to how energy is taken up in the atmosphere, ocean, ice, and land. Not a very large amount of attention -- we can't tell the difference between the poles and the equator, or upper vs. lower atmosphere or ocean. But it's a start.
But for now, let's look at the simplest model in longitude only. As with any of our models, they start with the conservation of energy. The energy coming in is, as before, from the sun. How much energy arrives does not depend on what longitude we're at. Remember, even though the sun rises in the east and sets in the west -- east and west being matters of longitude -- the sun does eventually rise everywhere.
Energy coming in has to be balanced by energy going out. If it weren't, things would be changing over time and there is no time in this model. One part of the energy going out is the solar energy that gets bounced straight out. This fraction is called the albedo. Now albedo is something that can depend on longitude. For instance, land is more reflective than ocean. And along, say, 30 E, the earth is mostly land, while along, say 170 W, it is almost entirely ocean. Clouds can be anywhere. So ... we arrive at one of those unpleasant realities -- we have to get some data.
Normal business. Working through it tells us that we need the albedo averaged over time (say some years) and over all latitudes, for each longitude. (We don't have to average over elevation because albedo is defined as the energy bounced out -- from whatever level of the atmosphere -- divided by the energy coming in.)
Once we have that, we can compute, for each longitude, the temperature at which the terrestrial radiation going out balances the incoming energy. These temperatures should be something like the blackbody temperature of the earth we found in the simplest model. But they'll vary some.
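To make the arithmetic concrete, here's a minimal sketch in Python of that balance-temperature computation. The albedo values are made-up placeholders (the planetary average everywhere), not the data we actually need, so this shows only the shape of the calculation:

    import numpy as np

    SOLAR_CONSTANT = 1361.0   # W/m^2, sunlight arriving at the top of the atmosphere
    SIGMA = 5.67e-8           # W/m^2/K^4, Stefan-Boltzmann constant

    # Hypothetical time- and latitude-averaged albedo for each longitude band.
    # Real values would come from satellite data; 0.30 is just a placeholder.
    longitudes = np.arange(0, 360, 10)        # degrees east
    albedo = np.full(longitudes.shape, 0.30)

    # Averaged over the day and the year, each band absorbs S*(1 - albedo)/4
    # per square meter (the 4 is the sphere-versus-disk geometry).
    absorbed = SOLAR_CONSTANT * (1.0 - albedo) / 4.0

    # Balance requires sigma*T^4 = absorbed, so T = (absorbed/sigma)^(1/4).
    balance_temp = (absorbed / SIGMA) ** 0.25

    for lon, temp in zip(longitudes, balance_temp):
        print(f"{int(lon):3d} E: {temp:5.1f} K")

With the uniform placeholder albedo, every band comes out near 255 K, the blackbody temperature from the simplest model; real albedos would make the numbers wobble from longitude to longitude.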
The next piece of data we'll need is the observed blackbody temperature at each longitude. Then we'll compare the model to the observations.
One thing we'll be looking for in our comparison: now that we've added longitude, a new thing can happen. In the simplest model, the energy coming in had to be balanced, right there, by energy going out. With longitude in the model, it's possible for energy to shift from one longitude to another. The Gulf Stream and North Atlantic Current, for instance, move a lot of energy from west to east. If no energy is being transported, on average, then the temperature for a longitude will be just what we compute from the local balance. If there's a mismatch, energy has to be getting moved from one longitude to another.
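Once the numbers are in hand, that check is simple arithmetic. A matching sketch, again with invented numbers standing in for the observed blackbody temperatures:

    import numpy as np

    SIGMA = 5.67e-8  # W/m^2/K^4

    # Placeholder values: balance temperatures from the albedo calculation,
    # and invented 'observed' blackbody temperatures by longitude.
    balance_temp = np.array([254.8, 254.8, 254.8])    # K, what local balance requires
    observed_temp = np.array([253.0, 254.8, 256.5])   # K, hypothetical observations

    # Energy absorbed versus energy actually radiated at each longitude.
    absorbed = SIGMA * balance_temp ** 4   # by construction, equals the absorbed solar
    emitted = SIGMA * observed_temp ** 4   # what the band actually radiates to space

    # A band that radiates more than it absorbs must be importing energy from
    # other longitudes; one that radiates less must be exporting it.
    implied_import = emitted - absorbed
    print(implied_import)   # W/m^2 at each longitude; zero means no net transport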
I haven't collected the data yet, so I don't really know how it will turn out. I expect that clouds will mask the albedo differences between land and ocean to a fair extent, so the temperatures we compute will be fairly uniform. I also expect that heat transport by longitude will be small -- the Gulf Stream's eastward warm current is balanced, at least partly, by a cool current (relative to local temperatures, that is!) at the equator.
On the other hand, I haven't looked at the data yet, so there is room for surprise. That'll be fun. Means we get to learn more than we expected.
02 September 2009
Models and Modelling
"All models are wrong. Some models are useful." George Box
Box was a modeller, and the sentiment is widely shared among modellers of all kinds. This might be a surprise to many, who imagine that modellers think they're producing gospel. The reality is, we modellers all acknowledge the first statement. We are more interested in the second -- some models are useful.
But let's back up a bit. What is a model? In figuring out some of this, we'll see how it is that models can be imperfect, but still useful.
One thing to remember is that there are several sorts of model. On fashion runways or the covers of magazines, we see fashion models. In hobby shops, we can get a model spacecraft or car. Heading more towards science, we find the laboratory model, the biological model animal, the statistical model, the process model, the numerical model, and so on.
Common to all of them is that they have some limited purpose. A fashion model's job is to display some fashion to advantage -- making the dress/skirt/make up/... look good. She's not an attempt to represent all women accurately. The model spacecraft is not intended to reach the moon. But you can learn something about how a spacecraft is constructed by assembling one, and the result will look like the real thing.
In talking about a laboratory model, read that as a laboratory experiment. You hope that the setup you arrange in the lab is an accurate representation of what you're trying to study. The lab is never exactly the real thing, but if you're trying to study, say, how much a beam flexes when a weight is put in the middle, you can get pretty close. If you want to know the stability of a full-size bridge with full-size beams and welds and rivets assembled by real people, it's more of a challenge -- representing the 1000 meter bridge inside a lab that's only 10 meters long. It won't be exact, but it can be good enough. Historical note for the younger set: major bridges like the Golden Gate Bridge, Brooklyn Bridge, Tower Bridge, and such were designed and built based on scale models like this. The Roman aqueducts, designed over 2000 years ago, still stand, and never came near a computer. They were all derived from models, not a single one of which was entirely correct.
In studying diseases, biologists use model animals. They're real animals, of course. They're being used as models to study a human disease. Lab rats and such aren't humans. But extensive testing showed that rats, for some diseases, and other animals for other diseases, react closely enough to the way humans do. Not exactly the same, but closely enough that the early experiments and tests of early ideas can be done on the rats rather than on people. The model is wrong, but useful.
Statistical models seem to be the sort that most people are most familiar with. My note Does CO2 correlate with temperature arrives at a statistical model, for instance -- that for each 100 ppm CO2 rises, temperature rises by 1 K. It's an only marginally useful model, but useful enough to show a connection between the two variables and the approximate order of magnitude of its size. As I mentioned then, this is not how the climate really is modelled. A good statistical model is the relationship between exercise and heart disease. A statistical model, derived from a long-term study of people over decades, showed that the probability of heart disease declined as people did more aerobic exercise. Being statistical, it can't guarantee that if you walk 5 miles a week instead of 0 you'll decrease your chance of heart disease by exactly X%. But it does provide strong support that you're better off covering 5 miles instead of 0. Digressing a second: the same study was (and still is) part of the support for the 20-25 miles per week of running or walking or equivalent (30-40 km/week) suggested for health. The good news being that while 20 is better than 10, 10 is better than 5, and 5 is way better than 0. (As always, before starting, check with your doctor about your particular situation, especially if you're older, already have a history of heart problems, or are seriously overweight.) This model is wrong -- it won't tell you how much better off you'll be, and in some cases your own result might be a worsening. But it's useful -- most people will be better off, many by a large amount, if they exercise.
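Mechanically, a statistical model like the CO2-and-temperature one is nothing more than a fitted line. A toy sketch, with invented numbers rather than the data from that earlier note:

    import numpy as np

    # Invented illustrative data: CO2 concentration (ppm) and temperature anomaly (K).
    co2 = np.array([315.0, 340.0, 365.0, 390.0])
    temp_anomaly = np.array([0.00, 0.22, 0.51, 0.74])

    # Least-squares straight line: temperature = slope * co2 + intercept.
    slope, intercept = np.polyfit(co2, temp_anomaly, 1)

    # Quote the slope the way the text does: Kelvin per 100 ppm of CO2.
    print(f"{slope * 100:.2f} K per 100 ppm CO2")

The fit says nothing about why the two move together; it only summarizes that, and by how much, they do.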
Process models started as lab experiments, but are also done as numerical models. Either way, the method is to strip out everything in the universe except exactly and only the thing you want to study. Galileo, in studying the motion of bodies under gravity, stripped the system, and slowed it down, by going to the process model of balls rolling down sloping planes. He did not fire arrows or cannonballs, or use birds, bricks, and so on. He simplified to just the ball rolling down the plane. The model was wrong -- it excluded many forces that act on birds, bricks, and all. But it was useful -- it told him something about how gravity worked. In particular, it told him that gravity didn't care how big the ball was; everything accelerated by the same rules. In climate, we might use a process model that includes only how radiation travels up and down through the atmosphere. It would specify everything else -- the winds, clouds, where the sun was, what the temperature of the surface was, and so on. Such process models are used to try to understand, for instance, what is important about clouds -- is it the number of cloud droplets, their size, some combination, ...? As a climate model, it would be wrong. But it's useful in helping us design our cloud observing systems.
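As one toy illustration of that radiation-only sort of process model (my own made-up example, not one from the literature): a single absorbing layer over a surface, with everything else about the climate stripped away or simply specified.

    SIGMA = 5.67e-8          # W/m^2/K^4, Stefan-Boltzmann constant
    SOLAR_CONSTANT = 1361.0  # W/m^2
    ALBEDO = 0.30            # planetary average, specified rather than computed

    # Everything except radiation is gone: absorbed sunlight must be balanced
    # by longwave radiation leaving the top of the system.
    absorbed = SOLAR_CONSTANT * (1.0 - ALBEDO) / 4.0

    # No atmosphere at all: sigma*T^4 = absorbed.
    bare_temp = (absorbed / SIGMA) ** 0.25

    # One layer that absorbs all the surface's longwave and re-emits half up,
    # half down: the surface then has to satisfy sigma*T^4 = 2*absorbed.
    one_layer_surface = (2.0 * absorbed / SIGMA) ** 0.25

    print(f"bare planet: {bare_temp:.0f} K, one-layer greenhouse surface: {one_layer_surface:.0f} K")

It's wrong as a climate model -- no winds, no clouds that vary, a single crude layer -- but it isolates one process well enough to show why an absorbing atmosphere warms the surface.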
Numerical models -- actually we need to expand this to 'general computational models', since statistical models, process models, and even some disease models are now done as computational models. These general models attempt to represent relatively thoroughly (unlike a process model) much of what goes on in the system of interest. An important feature is that electronic computers are not essential. The first numerical weather prediction was done by pencil, paper, and sometimes an adding machine -- more than 25 years before the first electronic computer. Bridges, cars, and planes are now also modelled this way, in addition to, or instead of, scale models. Again, all of them are wrong -- they all leave out things that the real system has, or treat them in ways simpler (easier to compute) than the real thing. But all can be useful -- they let us try 'what if' experiments much faster and cheaper than building scale models. Or, in the case of climate, they make it possible to try the 'what if' at all. We just don't have any spare planets to run experiments on.
Several sorts of models, but one underlying theme -- all wrong, but they can be useful. In coming weeks, I'll be turning to some highly simplified models for the climate. The first round will be the four 1-dimensional models. Two are not very useful at all, and two will be extremely educational. These are 4 of the 16 climate models.