Wasn't planning on such a gap between posts, but life happens. In the previous note, I talked a little about the first numerical weather prediction. One of the comments (you do read the comments, I hope -- lots of good material there!) mentioned an article by Peter Lynch on the first successful numerical weather prediction.
I was of two minds about that article. On the plus side, it is a well-written article about something I'm interested in. On the minus side, I was planning on writing an article much like it myself and now I can't :-( On the plus side, I got to read the article without doing all the work I'd have had to do in writing it myself. In any case, go read it!
The first successful numerical weather prediction was done using the ENIAC computer, and took almost 24 hours to make a 24-hour prediction. I learned numerical weather prediction from George Platzman, who was involved in that effort. When he learned of my interest in history, he gave me copies of his notes from his 1979 lecture, in which he'd re-created the original model. These included, for instance, a lot of work to scale all the numbers to a magnitude near 1: the ENIAC did fixed-point arithmetic, and if the numbers weren't close to 1, precision would have decayed rapidly.
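To get a feel for why that scaling mattered, here is a tiny sketch in Python. It is my own illustration, not the ENIAC's actual number format or the model's actual scaling; the function name and the choice of ten fractional bits are just for the example. Rounding values onto a fixed-point grid keeps good relative precision for numbers near 1 and loses it badly for numbers far from 1:

# Minimal sketch of fixed-point rounding (not ENIAC's actual arithmetic).
def to_fixed_point(x, frac_bits=10):
    """Round x onto a grid with frac_bits binary digits after the point."""
    scale = 2 ** frac_bits
    return round(x * scale) / scale

for value in (0.73, 0.00073):   # one value of order 1, one far smaller
    stored = to_fixed_point(value)
    rel_err = abs(stored - value) / value
    print(f"true={value}  stored={stored}  relative error={rel_err:.1e}")

# The value near 1 keeps about three significant figures here; the small
# one keeps essentially none.  Rescaling variables so everything is of
# order 1 is what preserves precision in fixed-point work.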
One thing I did after programming the model was let it run out farther into the future than the original 24 hours. The model blew up. A quick check with Platzman showed that this was no surprise. He had (40 years earlier) done the numerical analysis on the boundary conditions and shown that a) they were unstable but b) the instability was 'slow' -- it would not affect the 24-hour results. My result confirmed that, though he was surprised that it started to blow up around day 3 and was useless by day 5.
The more refined test, on a model that included the same physical processes but did not have the boundary condition problem, was done by P. D. Thompson, "Uncertainty of initial state as a factor in the predictability of large scale atmospheric flow patterns", Tellus, 9, 275-295, 1957. It appears he was the first to publish on whether there are intrinsic limits to how far ahead one can predict the weather. Up to about this time, the sentiment had been that, given enough computing power (Richardson envisioned something far larger than the WPA project Eli mentioned in the previous post's comments) and good enough data, working out the data management (the source of Richardson's problems) and the numerical representation would suffice to get meaningful answers about the weather indefinitely far into the future.
Thompson conducted an experiment on that presumption. Suppose that we start a weather forecast model from two slightly different initial conditions -- so slightly different, in fact, that no plausible observing system would be able to tell the difference between them. Would the two forecasts also start out, and remain, too close together for observations to tell the difference? The surprising (at that time) answer was no. Those unobservably small differences would lead to easily observed differences once you were far enough into the forecast. Worse, 'far enough' wasn't terribly far -- only a week or two.
In 1963, while working on a highly simplified model of convection, Ed Lorenz brought dynamical chaos into meteorology. Some of his work, and some of its implications, are described at a non-professional level in James Gleick's Chaos: Making a New Science. That book was published in 1987, at the peak of the optimism about chaos; things got messier later. But the history isn't too bad and the descriptions of chaos are helpful.
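Thompson's twin-forecast question is easy to play with in miniature using Lorenz's 1963 convection equations. The sketch below is my illustration, not a reconstruction of either Thompson's or Lorenz's actual computations: it uses simple Euler time steps, the standard textbook parameter values, and names of my own choosing. Two runs start from states differing by one part in ten thousand, and the program prints how far apart they drift:

# Twin-run sketch with the Lorenz (1963) convection equations.  An
# illustration of sensitive dependence on initial conditions, not a
# reconstruction of Thompson's or Lorenz's calculations.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one step with a simple Euler difference."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

# Two starting states closer together than any observing system could
# distinguish.
run_a = (1.0, 1.0, 1.0)
run_b = (1.0001, 1.0, 1.0)

for step in range(1, 3001):
    run_a = lorenz_step(run_a)
    run_b = lorenz_step(run_b)
    if step % 500 == 0:
        gap = max(abs(p - q) for p, q in zip(run_a, run_b))
        print(f"step {step}: largest difference = {gap:.4f}")

# The gap grows roughly exponentially until the two runs are no more alike
# than two arbitrarily chosen states on the attractor.

A fancier time-stepping scheme would change the details of either trajectory, but not the point: the two runs part company regardless.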
The key for the moment is that this business of initial states that are quite close to each other giving predictions that are quite different is one of the symptoms of chaos. It's also a symptom of a bad model or a programming error, so you have to do some work, as Thompson and Lorenz did, to show that you're dealing with chaos rather than errors.
I started in the previous post with some comments about chaos and climate. We're about ready for those. A couple of things to remember are that numerical weather prediction goes back a long way, and that from very early on there have been questions about just how far ahead we could predict weather. What is still open is what any of this -- chaos, limited predictability of weather, difficulty of writing error-free models -- means for climate. More next time.
18 February 2009
2 comments:
To show how far computers have come since those days, Lynch has put the ENIAC model on a cellphone. A 24-hour forecast runs in less than a second on a Nokia 6300. The program to run the model is available to download on Lynch's site.
Dr. Grumbine,
I have a question which is off-topic, but I was hoping for a suggestion. I'm doing an undergraduate statistics project, and I want to make mine climate change-oriented. I was hoping for some useful suggestions as to what would make a good idea -- I'm hoping to do some kind of ice core data analysis (of some variable). My background in statistics is confined to a beginner class, but I do have math background out to differential equations and calc courses, so I can pick things up.
I ask because stats in climate can get ugly pretty quickly... principal component analysis and all that stuff is a bit beyond me. I still want something that is practical and informative. I can do the research in the climate literature and understand the terminology; my barrier is more in the data analysis background.