My son (stepson, to be technical) is now published in book form. The first of, I expect, many: _The Machine: A Field Guide to the Resurgent Right_, by Lee Fang. It is available on Amazon; the publisher is www.thenewpress.com.
In keeping with my blog, this is a research book. For Lee, that means research on money and messaging in politics. The text is readable, and there are plenty of citations to support its points. As always, follow the citations.
Another point in the book's favor is that in several cases, Lee is the one who did the original research. One of the more amusing parts is that some of that research, an interview with one of the Koch brothers, was played as part of an episode of The Newsroom.
So yes, buy my son's book. But do so because it is well researched and will shed much light on how US politics is now run.
25 February 2013
11 February 2013
Question Place
OK, it looks like life is moving in a more blog-friendly way. So I'll hang out the shingle again for your questions, suggestions, and comments.
In the meantime, I'll note a few additions and losses. Among the losses: the Blogger widget for showing the 10 most recent comments is broken. You can still subscribe to the comments. My notes about sites which seem to be inactive, but which still have material worth reading, are now a 'page' -- one of the tabs near the top of the page -- so a loss and an addition.
Also added to the tabs near the top are "The Simplest Climate Model" and "What is Climate?", which finally collect in one place the several posts I've made on each topic. There'll be more.
07 February 2013
Time to go do some science
NOAA/NWS/NCEP/Environmental Modeling Center has a request for data out, one which gives anyone near water who can read a thermometer a chance to do some science. There's a science history behind why this request exists, and I'll give my own, biased, view of it.
All data have errors and are messy. Though George Box's comment is usually aimed at modelers ("All models are wrong, some models are useful."), it is equally applicable to data and data analysis. All data are wrong, some data are useful.
In the case of sea surface temperatures (sst), efforts to analyze global ocean sst started with badly distributed data sources -- ships. They give you a fair idea of what the temperature is along the paths the ships take. So the route between New York and London has been well observed by ship for a long time. But not many ships go through the south Pacific towards Antarctica. If you want to know what's happening down there, you need a different data source. One such source is buoys. Though, again, buoys are distributed in a biased way, being mostly near shore (so that they can be maintained and repaired).
Then came satellites, and all was good, eventually, for a while. Polar orbiting satellites see the entire globe. Starting with instruments launched in the early 1980s, it has been possible to make pretty good analyses of global sst, at least on grid cells 50-200 km on a side. Since that is as good as or better than any of the ship+buoy analyses could do, it was a great triumph. The ship and buoy data, though, remained, and remain, important. One of the problems satellite instruments face is that they can 'drift', that is, read progressively too warm or too cold. To counter that possibility and other issues, the surface (in situ) data are used as a reference. So for a time in, say, the early 2000s, all was good.
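To make the "reference" idea concrete, here is a minimal sketch of estimating and removing a slow satellite drift against collocated in situ observations. It is only an illustration under my own assumptions -- not the procedure any operational sst analysis actually uses -- and every number and name in it is made up.

```python
# Minimal sketch of using in situ data as a reference for satellite sst.
# NOT an operational procedure: just one way a slow instrument drift could be
# estimated and removed using collocated buoy/ship matchups. Values are invented.
import numpy as np

def estimate_satellite_bias(sat_sst, insitu_sst):
    """Mean difference (satellite minus in situ) over collocated matchups."""
    diffs = np.asarray(sat_sst) - np.asarray(insitu_sst)
    return np.nanmean(diffs)

# Collocated matchups for one period (degrees C), hypothetical values:
sat = [15.3, 18.1, 22.4, 9.8]
buoy = [15.0, 17.9, 22.1, 9.6]

bias = estimate_satellite_bias(sat, buoy)   # roughly +0.25 C of apparent drift
corrected = np.asarray(sat) - bias          # remove it before using the data
print(f"estimated bias: {bias:+.2f} C")
```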
But scientists and users of scientific information are never satisfied for long. For sst, some of the users are fishermen -- some fish have very particular temperature preferences. As it became possible to do a pretty good global 50 km analysis, with new data over about two-thirds of the ocean every day, scientists and users started demanding more frequent updates of information, and on a finer grid. They also got increasingly annoyed about the parts of the ocean that only got new observations every 5-20 days. This includes areas like the Gulf Stream, where it is often cloudy for extended periods. The traditional satellites are great, but they don't see through clouds.
Another major user of sst information is numerical weather prediction. When weather models were using cells 80-200 km on a side, an sst analysis at 100 km (say) was a pretty good match. But weather models continued to push to higher resolution, so that by the early 2000s, 10 km grids weren't unheard of. The reason for such small grid spacing in weather prediction models is that weather 'cares' about events at very small scales. If weather cares about those smaller scales, then it becomes important to provide sst information at the smaller scales as well. An inadvertent proof of that came when a model made a bad forecast for a December 2000 storm, and the cause was traced back to an sst analysis that was too coarse. See Thiebaux and others, 2003, for the full analysis.
Plus, of course, there is interesting oceanography that requires much finer scale observations than 100 km. So a couple of different efforts developed. One was to use microwave data to derive sea surface temperatures; AMSR-E was the first microwave instrument used for sst in operations, as far as I know. (Sea ice isn't the only thing you can see with microwaves!) That addressed the issue of seeing the Gulf Stream (and other cloudy areas) most days. The other was to push for higher resolution sst analyses. This led to an international effort to analyze the global ocean at high (say 25 km and finer, sometimes 10 km and finer) grid spacing. More is involved in that than just changing a parameter in the program. (You'll get an answer if you do that, but it won't be as good as what you had at the coarser grid spacing.)
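To illustrate that last point, here is a toy 1-D objective analysis in the spirit of the classic Cressman scheme. It is only a sketch under my own simplifying assumptions (a constant background field, a handful of made-up observations), not any operational sst analysis, but it shows why merely shrinking the grid spacing buys little: with the same observations and the same influence radius, the finer grid just interpolates more smoothly rather than resolving anything new.

```python
# Toy Cressman-style analysis: same observations, two different grid spacings.
# Purely illustrative; positions, temperatures, and radius are made up.
import numpy as np

def cressman_weights(dist, radius):
    # Classic Cressman weight: (R^2 - r^2)/(R^2 + r^2) inside the radius, 0 outside.
    w = (radius**2 - dist**2) / (radius**2 + dist**2)
    return np.where(dist < radius, w, 0.0)

def analyze(grid_x, obs_x, obs_sst, background, radius):
    # 1-D analysis: nudge a constant background toward nearby observations.
    analysis = np.full_like(grid_x, background, dtype=float)
    for i, x in enumerate(grid_x):
        w = cressman_weights(np.abs(obs_x - x), radius)
        if w.sum() > 0:
            analysis[i] += np.sum(w * (obs_sst - background)) / w.sum()
    return analysis

obs_x   = np.array([0.0, 300.0, 700.0])   # observation positions in km (made up)
obs_sst = np.array([18.0, 20.5, 17.0])    # observed sst in deg C (made up)

coarse = analyze(np.arange(0.0, 1000.0, 100.0), obs_x, obs_sst, background=18.5, radius=400.0)
fine   = analyze(np.arange(0.0, 1000.0, 10.0),  obs_x, obs_sst, background=18.5, radius=400.0)
# The fine grid has ten times as many points, but with the same three
# observations and the same radius it contains no extra real detail --
# only a smoother-looking interpolation of the same information.
```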
On the ocean side, the high resolution analyses are doing relatively well. But as you go to finer grid spacings, new matters appear. The Great Lakes are very large, so they can be seen by satellite easily, and they have buoy data through at least part of the year, so the satellite observations can be corrected as needed. But ... go to a finer grid spacing weather model and you discover that there are a lot of lakes smaller than the Great Lakes. For a 4 km model, there are some thousands of lakes just in North America. None of them have buoys, and almost none even have climatologies. Also at this grid spacing, you start seeing the wider parts of rivers.
Here's where an opportunity arises for people who live near a shore (whether river, lake, or ocean). NOAA/NWS/NCEP/Environmental Modeling Center is requesting observations of water surface temperatures to use as a check on its analysis of temperatures close to shore ('close' meaning, say, within 50 km (30 miles) of shore, and at least 400 m (a quarter mile) out from shore). Check out the project's web page at Near Shore Lake Project.
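If you want to start keeping readings before you sign up, here is a minimal sketch of logging them, in a CSV layout I made up for illustration only. The project's web page describes whatever format and submission method they actually want -- check there rather than relying on this.

```python
# Minimal personal log of water temperature readings (hypothetical CSV layout).
import csv
import os
from datetime import datetime, timezone

FIELDS = ["time_utc", "latitude", "longitude", "water_body", "water_temp_c"]

def log_reading(path, latitude, longitude, water_body, water_temp_c):
    """Append one reading, sanity-checking the temperature first."""
    if not (-2.0 <= water_temp_c <= 45.0):   # plausibility check for liquid water
        raise ValueError(f"implausible water temperature: {water_temp_c} C")
    row = {
        "time_utc": datetime.now(timezone.utc).isoformat(timespec="minutes"),
        "latitude": latitude,
        "longitude": longitude,
        "water_body": water_body,
        "water_temp_c": water_temp_c,
    }
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()   # write the header once, on first use
        writer.writerow(row)

# Example: a made-up reading from a dock on Lake Ontario.
log_reading("water_temps.csv", 43.26, -79.07, "Lake Ontario", 3.5)
```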
As always, I don't speak for my employer or any groups I might be a member of. I'm pretty certain that all people who work on sst would disagree with at least parts of my above mini-history. Be that as it may, it should be a fun project.