Tuesday, February 24, 2009

Science and media

I've been spending more time lately commenting on other blogs, including The Intersection, than constructing my own posts here. The latest bit was about science, scientists, journalists, and science journalists. I'll leave my comments over there to stand in their context.

But here, I'll tell the story of my one encounter with major media. Contrary to the standard stereotypes of both scientist and journalist, it was, I think, a very successful and positive encounter. Jack Williams (then and still at USA Today) called me up at work with a couple of simple questions about my area, hoping that I could also provide some graphics. I answered (life was simpler back then) and could indeed provide the data (graphics, then as now, not being a strength of mine). Towards the end of the call, though, he also asked the good journalistic question of what the topic we were talking about might affect that readers would care about.

I'll take credit for not doing the annoying scientist thing of diving into the obscurities I liked personally, on the belief that of course anything I was interested in would be interesting to everybody (that's what a blog is for :-). But, in truth, there were plenty of reasons for readers to be interested. So we talked some more, he followed up, and so forth -- to the point that he took the story, as it had developed, to the science page editor for consideration as a feature story. The original idea was a little weather box, not a big science page spread.

More good news followed, as the science editor approved the story as a feature. So more telephone calls as he pursued understanding of the science, and asked for more people he could talk to about various parts of the subject. (I recently ran across my old notes, including a fax I'd sent him to ensure that he had exactly the right figure in mind.) He also did a fair amount of checking with me that he was representing the science correctly -- "If I said it this way, would it still be correct?" He pretty much always did have a good rephrase, but we iterated some on a couple bits.

It's a long time since that happened, but I still remember it fondly.

On the journalist side, what I think helped it work was that he:
  • was willing to follow where the science led (no preset 'story' to force fit things into)
  • was obviously working to make sure he communicated the science correctly
  • did homework to develop his own understanding of the science
On the scientist side:
  • was willing to let things be rephrased to the audience
  • was meeting the journalist's questions, not trying to drive a conclusion
  • was trying to keep in mind what job the journalist was trying to accomplish (vs. trying to turn USA Today into an AMS or AGU journal)

Not that I did a perfect job, nor did he. But, then, perfect doesn't happen often. I do think that anyone reading that article came away understanding more about the science than they walked in knowing, me included.

While I'm sure that not all journalists take the care that Williams did, and that many scientists are harder to talk to than I was (and some much better), I do feel some confidence that science journalists and scientists don't have to work at cross purposes nearly as often as it turns out they do.

Sunday, February 22, 2009

Ice core project

From Bart:
Dr. Grumbine,

I have a question which is off-topic, but I was hoping for a suggestion. I'm doing an undergraduate statistics project, and I want to make mine climate change-oriented. I was hoping for some useful suggestions as to what would make a good idea -- I'm hoping to do some kind of ice core data analysis (of some variable). My background in statistics is confined to a beginner class, but I do have math background out through differential equations and calc courses, so I can pick things up.

I ask because stats in climate can get ugly pretty quickly ... principal component analysis and all that stuff is a bit beyond me. I still want something that is practical and informative. I can do the research in the climate literature and understand the terminology; my barrier is more in the data analysis background.


I'll throw this open also for ideas from readers. First step, I think, in a science project is to get a lot of ideas together.

Some good books to read for the statistics side of things are:
How to Lie With Statistics by Darrell Huff
Lady Luck by Warren Weaver
The Visual Display of Quantitative Information by Edward Tufte

On the ice core side:
The Two Mile Time Machine by Richard Alley
Ice Ages: Solving the Mystery by John and Katherine Palmer Imbrie

Most relevant to your specific thoughts is Alley's book. But read the others when you have time.

For projects ...
Well, one part would be to start looking at the ice core literature and see what suggests itself. Starting with the Vostok ice cores in the mid-1980s, there has been a lot published on these in Science and Nature (easy journals to get hold of). For doing this, it's good to think a bit about what kinds of analysis techniques you have from your stats class, and then see what has been published using that sort of analysis. While it's likely (but not guaranteed!) that the analysis has been done before, just not published, it's also possible that it hasn't been done at all.

I'd stay away from the more hard-core time series analysis side of things here, as this is your first stats class and the ice core time series are not on the straightforward side of that problem.

But it should be doable to have a look at the ice cores for 'climate' lengths of time in the way I was describing, casually, in how to decide climate trends and finding climate means. The ice cores have coarser resolution, but this is made up for by the greater time spans they cover. An important part of the decision I described in the means note is that if you go longer (than 30 years, in that case), new processes might start acting. In that case, you want some even longer averaging period -- long enough to average out the processes that are, say, 50-200 years long, and short enough not to be much affected by the ones 1500-2500 years long. But what is that period? Is it the same for all variables (CO2, salt, dust, ...)? Is it the same for glacials and interglacials?
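To make the window question concrete, here's a minimal sketch in Python. The two periods (87 and 2000 years), the amplitudes, and the signal() function are all invented stand-ins for a real ice core record, chosen only to give the series one fast process and one slow one:

```python
import math

# Invented stand-in for an ice core record: a fast (87 year) process
# riding on a slow (2000 year) one. Real data would replace signal().
def signal(t):
    return 5.0 * math.sin(2 * math.pi * t / 2000) \
         + 1.0 * math.sin(2 * math.pi * t / 87)

def window_mean(center, width):
    """Mean of the record over a window of the given width (years)."""
    half = width // 2
    values = [signal(t) for t in range(center - half, center + half)]
    return sum(values) / len(values)

for width in (100, 300, 400, 1500):
    print(width, round(window_mean(500, width), 2))
```

Run it and the 300 and 400 year means agree closely, while the 1500 year window has started to average away the slow (climate) variation itself -- which is the sense in which an averaging period can be both long enough and too long.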

The hard-core part of the project is to go beyond what I did, which was only eyeball verification. To do that, you want to find appropriate statistical tests to determine, for instance, that the mean computed with a 300 year span of data is essentially the same (and what would 'essentially the same' mean in statistics?) as for 400 years, but that both are meaningfully different from the means computed for 200 or 600 years.

Similarly, how can you tell that the means for glacial periods are different from the means for interglacials? Better, how can you determine objectively from the data just when the glacials and interglacials start and end?
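For the glacial vs. interglacial question, one standard tool from a first stats class is a two-sample t-test. A minimal sketch, on synthetic stand-in 'CO2' numbers rather than real core data (the 190 and 280 ppm centers and the spread are just illustrative choices):

```python
import math
import random
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic: difference of means over its standard error.
    Values well beyond about 2 mean the sample means clearly differ."""
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    na, nb = len(sample_a), len(sample_b)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

random.seed(42)
# Synthetic stand-ins: glacial CO2 runs lower than interglacial.
glacial      = [190 + random.gauss(0, 5) for _ in range(100)]
interglacial = [280 + random.gauss(0, 5) for _ in range(100)]

print(round(welch_t(interglacial, glacial), 1))
```

One wrinkle to keep in mind: the t-test assumes roughly independent samples, and consecutive ice core values are correlated in time, so part of the project would be working out how much that matters.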

Ice core data is archived at the National Geophysical Data Center. You'll also have to do some thinking about which data to use, and how. The longest time series come from Antarctica, but they also have the worst time resolution. Better resolution comes from Greenland, but the records are shorter. Better time resolution still comes from Andean and Himalayan glaciers, but they're much shorter again.

The question of what the climate periods are in ice cores is an open one in the professional literature -- at least it was as of fall 2007, when an ice core person was asking me how sea ice people selected their climate period, and I asked her how ice core people did.

Good luck on whichever version of the project you do. And do let us know how it turns out!

Wednesday, February 18, 2009

First successful numerical weather prediction

Wasn't planning on such a gap between posts, but life happens. In the previous note, I talked a little about the first numerical weather prediction. One of the comments (you do read the comments, I hope, lots of good material there!) mentioned an article by Peter Lynch on the first successful numerical weather prediction.

I was of two minds about that article. On the plus side, it is a well-written article about something I'm interested in. On the minus side, I was planning on writing an article much like it myself and now I can't :-( On the plus side, I got to read the article without doing all the work I'd have had to do in writing it myself. In any case, go read it!

The first successful numerical weather prediction was done using the ENIAC computer, and took almost 24 hours to make a 24 hour prediction. I learned numerical weather prediction from George Platzman, who was involved in that effort. When he learned of my interest in history, he gave me copies of his notes from his 1979 lecture, in which he'd re-created the original model. This included, for instance, a lot of work to scale all the numbers to a magnitude near 1. The ENIAC did fixed-point arithmetic; if the numbers weren't close to 1, precision would have decayed rapidly.
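A toy illustration of why that scaling mattered (my own example, not Platzman's actual procedure): mimic fixed-point storage by rounding every value to four decimal places, the way a machine with a fixed binary point keeps a fixed number of fractional digits.

```python
# Mimic fixed-point storage: keep only 4 fractional (decimal) digits.
def fixed_point(x, frac_digits=4):
    scale = 10 ** frac_digits
    return round(x * scale) / scale

value = 3.1415e-5                 # a physically small, unscaled number

raw = fixed_point(value)          # rounds to 0.0 -- all precision lost
scaled = fixed_point(value * 1e5) / 1e5   # carry the 1e-5 as a scale factor

print(raw, scaled)
```

Stored directly, the small number rounds to zero; rescaled to a magnitude near 1, all its digits survive, and the power of ten is carried along by hand. That bookkeeping, done consistently throughout a model, is the kind of work the scaling involved.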

One thing I did after programming the model was to let it run farther into the future than the original 24 hours. The model blew up. A quick check with Platzman showed that this was no surprise. He had (40 years earlier) done the numerical analysis on the boundary conditions and shown that a) they were unstable, but b) the instability was 'slow' -- it would not affect the 24 hour results. My result confirmed that, though he was surprised that it started to blow up around day 3 and was useless by day 5.

The more refined test, on a model that included the same physical processes but did not include the boundary condition problem, was done by P. D. Thompson, "Uncertainty of initial state as a factor in the predictability of large scale atmospheric flow patterns", Tellus, 9, 275-295, 1957. He appears to have been the first to publish on whether there are intrinsic limits to how far ahead one can predict the weather. Up to about this time, the sentiment had been that given enough computing power (Richardson envisioned something much larger than the WPA project Eli mentioned in the previous comments) and good enough data, working out the data management (the source of Richardson's problems) and the numerical representation would suffice to get meaningful answers about weather indefinitely far into the future.

Thompson conducted an experiment on that presumption. Suppose that we start a weather forecast model from two slightly different initial conditions. So slightly different, in fact, that no plausible observing system would be able to tell the difference between them. Would the two forecasts also start and remain too close together for observations to tell the difference? The surprising (at that time) answer was no. These unobservably small differences would lead to easily observed differences once you were far enough into the forecast. Worse, 'far enough' wasn't terribly far -- only a week or two.

In 1963, while working on a highly simplified model of convection, Ed Lorenz brought dynamical chaos into meteorology. Some of his work is described at a non-professional level in James Gleick's Chaos: Making a New Science, as are some of the implications. That book was written in 1987, at the peak of the optimism about chaos; things got messier later. But the history isn't too bad and the descriptions of chaos are helpful.
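Thompson's twin experiment is easy to reproduce on Lorenz's equations. A sketch (my own setup, not Thompson's): run the 1963 convection system twice from states differing by one part in a hundred million, integrated with a simple Euler step, and watch the runs diverge.

```python
# Lorenz (1963) convection equations, advanced by one Euler step.
def lorenz_step(state, dt=0.005, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (r * x - y - x * z),
            z + dt * (x * y - b * z))

run1 = (1.0, 1.0, 20.0)
run2 = (1.0 + 1e-8, 1.0, 20.0)    # unobservably small difference

max_sep = 0.0
for step in range(6000):          # 30 model time units
    run1, run2 = lorenz_step(run1), lorenz_step(run2)
    if step > 4000:               # look late in the 'forecast'
        max_sep = max(max_sep, abs(run1[0] - run2[0]))

print(round(max_sep, 2))
```

By the end, the two runs differ by something on the order of the size of the attractor itself: initial agreement to eight decimal places has bought nothing. Both runs stay bounded, though, which is part of showing you're seeing chaos rather than the kind of blow-up my boundary condition experiment produced.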

The key for the moment is that this business of initial states that are quite close to each other giving predictions that are quite different is one of the symptoms of chaos. They're also a symptom of a bad model or programming error, so you have to do some work, as Thompson and Lorenz did, to show that you're dealing with chaos rather than errors.

I started in the previous post with some comments about chaos and climate. We're about ready for those. A couple things to remember are that numerical weather prediction goes back a long way, and that from very early on there have been questions about just how far ahead we could predict weather. What is still open is what any of this -- chaos, limited predictability of weather, difficulty of writing error-free models -- means for climate. More next time.

Tuesday, February 17, 2009

Neither politicians nor political commentators

Neither politicians nor political commentators are a reliable source for your scientific information. I've been reminded of this yet again as a political commentator has decided to abuse information from the Cryosphere Today and repeat a lie he's used before. The commentator is George Will (Washington Post 15 Feb 2009, page B07).

He repeats the lie about the experts calling for global cooling in the 1970s. One can see from his article itself that he's not looking to the experts -- he's referring to newspapers and other non-science sources. What the experts did have to say was generally that they expected a warming -- if you look at what papers they were publishing in the scientific literature. William Connolley has been pursuing this question for years, including a paper with Tom Peterson and John Fleck in the Bulletin of the American Meteorological Society, The Myth of the 1970s Global Cooling Scientific Consensus (September 2008 issue).

Nearer to my heart, professionally speaking at least, is that he joined in the abuse of Cryosphere Today. I do know the fellow who runs it. (You're not surprised, I hope. The field isn't very large, so most of us know most of the rest of us.) He does good work and provides the public service of presenting his work daily. The downside is that he likely gets much more attention from the mentions by a political commentator than he does from scientific interest, which can be more than a bit frustrating.

Anyhow, Will opines "According to the University of Illinois' Arctic Climate Research Center, global sea ice levels now equal those of 1979." Except, if you go to Cryosphere Today, you'll see that the global figures are (or were at the time Will's column appeared) well below 1979. For a brief period at the end of last year, the statement wasn't relentlessly false. But, as tamino showed at Open Mind, the statement was still terribly false. This is also the error I was taking Roger Pielke, Sr. to task for last October.

Probably few people who care about the science, and what we really understand, actually get their information from political commentators. Still, I'm depressed that a widespread, influential source (Will in this case) is given such inappropriate credence while so blindly disregarding reality.

Back to real science tomorrow!

Monday, February 2, 2009

Start of Numerical Weather Prediction

I'm going to wind up with chaos and climate, but the route starts with numerical weather prediction. Numerical weather prediction itself starts much farther back than most people realize -- it's now about 90 years old. And it didn't start with the simplest possible weather prediction model. If you're not up on your history of science and technology, you didn't bat an eyelash at my mention of numerical weather prediction being 90 years old. Electronic computers are only 60-70 years old. 90+ years ago, when the first numerical weather prediction (NWP) was done, 'computer' meant a person who, with pen and paper, slogged through the calculations.

The first NWP was performed by hand, by Lewis Fry Richardson. He did so in between ambulance runs during World War I. The reason he was in an ambulance, rather than the much safer trenches, is that he was a conscientious objector to warfare (a Society of Friends (Quaker) member by religion). Nevertheless, he survived the war and completed his numerical prediction. It was finally published in 1922, having been essentially completed in 1919.

The model he used was what we now call a primitive equation model. It took the laws of conservation of mass, energy, and momentum in their full complexity and tried to solve them. The first successful numerical weather prediction was not made until about 1948, published in 1950: Charney, J. G., R. Fjortoft, and J. von Neumann, "Numerical Integration of the Barotropic Vorticity Equation", Tellus, 2, 237-254, 1950. It was done on a much simpler equation -- bearing much the same resemblance to the primitive equations as the simplest climate model I've mentioned before does to a full complexity climate model. This model (which Richardson could have done by hand more easily than what he took on) was run on one of the first electronic computers -- ENIAC.

The first model to implement something comparable to what Richardson tackled did not come for another two decades after the ENIAC model (the '6 layer PE' model: Shuman, F. G., and J. B. Hovermale, "An Operational Six Layer Primitive Equation Model", J. Applied Meteorology, 7, 525-547, 1968).

In any case, Richardson's forecast has often been called a glorious failure in the ensuing decades. The failure part being that the forecast was so drastically in error -- predicting a surface pressure change of over 100 mb, when a change of even 6 mb or so would have been considered large. He did recognize the likely sources of his problem, but doing this kind of computation by hand was too expensive (in time) to run multiple trials to nail down exactly which was the source of the problem and which idea for repair would take care of it.

The glorious aspect is his enduring contribution to the field. For starters, he invented numerical weather prediction. Much of what is done today is still dependent on approaches he invented. He also foresaw massively parallel processing, and one of the central problems in that -- making sure that your different processing 'nodes' (people, in his case, computer processors in ours) remained synchronized.

For more on Richardson's forecast, see especially Weather Prediction by Numerical Process, by Lewis F. Richardson. Original publication in 1922, republished in 1965 by Dover Publications. This is the original, full, document. More recent consideration was made by G. W. Platzman, "Richardson's Weather Prediction", Bull. Amer. Meteor. Soc., 60, 302-312, 1968. Platzman includes discussion of the sources of Richardson's problems. There is a later note in BAMS by Platzman giving more considerations and ideas.

Oceanography as a job

While it's true that I sometimes whine about parts of my job (mostly, not being able to go do it because I have to do something else instead ... still ...), there's no way I agree with what Popular Science apparently said:
http://rabett.blogspot.com/2009/01/how-did-we-miss-this-from-popular.html
making oceanography one of the worst jobs in science. I did respond over there, and this isn't entirely different from my comments there. But, it's perhaps more germane than a number of things I've commented on lately. So, onward.

I think science is one of the best fields of all to enter for a career. It happens that my main science is oceanography, particularly physical oceanography. But if someone told me that I had to be an astrophysicist instead, I'd be quite happy still. (Ditto quite a number of other fields.) Getting a job can be quite difficult. But if you do get one ... well, here's what I look at in going to work each day:

I get to work with a bunch of very bright people (easy to talk with, whether about the oncoming weather (which isn't as casual for my crowd as for most), or about the Cubs (ok, not so many here are Cubs fans)).

I get paid a comfortable salary. I'm no threat to a pro sports player, particularly not a star. But I can afford to live ok (by my standards) in the Washington DC area.

I don't have to do any heavy lifting or work with obnoxious chemicals. Some scientists do, and those aren't my fields. (But, for those who like it, hey, there's a field or sixteen where it'll be needed.)

I get to find out what's happening in the world before almost anybody else. At least for the sorts of things that I work with. And that really does mean almost anybody: for the sorts of things I work on, we're down to an easily counted number of places. You probably won't need to take off your shoes to do the counting.

When I'm doing science at my work (which isn't always, but does happen), it means that I help make a contribution to what anybody in the world knows about the universe. How cool is that?! Nobody understood or had thought of that idea before me.

When I'm doing engineering at my job (which is more common), it means that I've succeeded in taking some of our understanding of the universe (anybody, anywhere, any time) and managed, for the first time, to make it possible for someone else to get a practical benefit from it. In my area, 'practical benefit' has included "They Don't Die."

I'll probably add to this list, and invite other scientists to do so. The main thing is, science is a wonderful area to work in. At base, and not already listed: You get paid to do something that you like to be doing.